Real artists are not being replaced by machines just yet
I know I’m not the only person who is a wee bit disturbed by the current popularity of the Lensa app, which has people handing this relatively unknown company multiple photos of themselves so its AI can work its magic. Lensa is owned by the software company Prisma, based in Sunnyvale, CA. The app first launched in 2018 as a digital background remover (the one thing it does reasonably well), and recent additions have marketed it as a selfie-improvement app. And the Internet, being as vain as it has always been, was quick to notice: Lensa was the #2 most-downloaded app in the US this past week.
HOWEVER, AI isn’t perfect and we’re not seeing all the images that the app creates. One dear friend was brave enough to share one of Lensa’s less-than-stellar results.
Mangled fingers, crossed eyes, and a chunk missing from her left arm are just the most noticeable problems with the AI-produced image. [Significantly better pictures of Shannon are in my book, We Did It In The Tub. Click the link to purchase your copy!]
AI is flashy and new and easy to use without knowing a damn thing about digital imagery. I understand why it’s so popular. As I was discussing with a friend who just happens to be a therapist (she was a friend first), artists have had to deal for centuries with the vanity of people not wanting their portraits to look “too real.” We want what we perceive to be our flaws covered or masked or, at the very least, diminished to the point no one notices them. Portraits of world leaders from the 14th century and beyond can hardly be considered authentic because being too accurate could cost an artist their job, or possibly even their life! What the AI is doing isn’t new, just faster!
So, how do we respond to this challenge? I can only answer for myself, and thanks to the effects of that lovely chemo pill I’m taking, my answer has to be brief. Let me show you how an image progresses in my hands. We’ll start with one from 2009. Here’s the original RAW image:
The image was shot using natural light relatively late on a summer afternoon. This is real. If we’re being totally honest with ourselves, there’s nothing here to not like. She’s a beautiful young woman.
Now, here’s how I originally edited this image in 2009:
One can see that, at the time, I chose to remove the puffiness under her eyes, darken her skin tone, and give just a minimal amount of balance to her flesh tone. There’s not a lot of editing here and I doubt I spent more than 30 minutes with the image.
Now, what happens when I drop the same image into the current version of Photoshop with all its built-in AI tools and let them wreak havoc over the whole thing? I couldn’t bring myself to show anyone that original result. It was a mess! Her features were completely blurred, the new background was totally inappropriate for the image, and the highlights were completely blown out. Nope, you’re not seeing that one.
However, when Photoshop drops a bomb like that, the image is still recoverable! All the effects are added in layers, so each piece can be manipulated until one achieves a suitable image. Is the process fast? Oh hell, no! My sick ass spent roughly four hours fixing that mess. Here’s the end result:
I’m still not sure what’s going on with that background. While it’s better than the first one, it gives the appearance that she’s a giant floating among the trees, or something of that nature. I’m also reasonably sure that the dear girl in question has never worn that much makeup, or at least not that color, in her life. While I know who it is, I doubt facial recognition software would identify her. At this point, this is a picture of a different person.
Okay, that’s one image. But would the results be the same with one that was, let’s say, shot in the studio? Well, let’s find out. Here’s a raw image shot in the studio in 2010.
I only processed one image from this series before now, so I don’t have a comparison shot to show you. I turned the AI loose on the basic “clean up” of the image, removing background spots and noise. Here’s how it handled that task:
Uhm, okay. The ways it “softened” her features really aren’t acceptable, in my opinion, but for the sake of the experiment, I’ll let it go. For now. The photo is still rather bland, though, so I let AI select a background for the image. Here’s how that went:
Ugh. Holy perspective, Batman! There’s a giantess in the middle of the road! The sun is still behind her (allegedly) and there’s no shadow! Shadows are something AI seems to struggle with quite often. I did some work on the perspective by hand and then let AI take over once more. I gave it instructions to change her hair and eye color and to make the image more electric. The results returned required a lot of intervention, especially in blending the various layers. Four hours and a nap later, here are the final results:
Well … at least you can see the “electric” part. But who is that person? I am not sure how to begin describing everything going on in this image! And once again, the model is unrecognizable in this presentation. I know I wouldn’t want this thing floating around as something I’d done by hand! This is … something less than acceptable.
The one photo from this series that I processed looks like this:
I assume you can see and appreciate the difference.
Whether we like it or not, AI is going to be a part of digital photo processing. There’s no escaping it. What’s important is that we make a distinction between the real and the fake. Since “deep fakes” are already an issue, and AI is only going to complicate that realm, I won’t be surprised if at some point in the future there has to be some legally-binding declaration of authenticity on ID photos and the like.
In the meantime, I’m going to keep doing things the hard way, taking my time, fussing over this and that, and giving the AI something to watch.
The House Next Door Caught Fire
A discouraging way for 2021 to end
I was sound asleep this afternoon, trying to get in a good, long nap so there might be some hope of actually staying up until midnight. I was mildly aware of the dogs being restless and the sound of diesel-fueled trucks in the neighborhood, but that’s not uncommon around here, so I ignored it. Then, there was a knock at the back door. That is extremely uncommon. Our yard stays locked down because of the dogs. To knock on our door means jumping the fence.
“Get out! The house is on fire!” the boy yelled.
I jumped up and ran to the door.
“Get everyone out!” he said. “The house next door is on fire!”
I quickly turned around and yelled at the kids. “Out the door! Now! The house is on fire!” I grabbed the dogs and tossed them in the van. We quickly exited and pulled across the street. At that point, I couldn’t be sure it wasn’t our own house that was on fire. It was a risk I couldn’t take.
Only once I was sure that our house was reasonably safe did I run back in and grab things like the kids’ shoes. I stood on the corner with all our neighbors watching the flames fully engulf the house, one that was largely a carbon copy of ours. At least three, possibly four fire units from both the Indianapolis Fire Department and the Speedway Fire Department had responded. The streets were full of fire trucks and hoses and what seemed like 50 firefighters.
For the first time ever, my camera had not been among the things I thought to grab. I thought about it, but it didn’t seem as important this time. I caught a couple of pictures with my phone, and waited, trying to keep the dogs calm and answer the kids’ questions.
Only after the fire was mostly out and the kids and dogs safely back inside did I grab the camera and take a few pictures. Somehow, this seems a fitting ending to my year in so many ways.
As I stood there with my camera, my phone buzzed. Betty White had died.
Fuck this. If this is how 2022 is going to come in, we’d best brace ourselves for the toughest year yet.