When AI Looks Real
Image 1: the original “shed”.
Carrying on from the previous AI article.
For the photorealism category, I took a different approach. I selected two of my recent photographic successes — the night-lit shed and the solitary bench — and used them as the basis for a test. Could Adobe Firefly, guided only by words, recreate something visually indistinguishable from my originals?
Image 2: the AI-generated version of “The Shed”.
I used ChatGPT to help me construct detailed prompts describing the scenes: the lighting, composition, textures, and mood. Firefly then produced several versions, which I refined through iteration and later adjusted in Photoshop. The results were striking: Images 2 and 4 (AI-generated) are not only credible as “real” photographs, they also echo the feeling of the originals with uncanny precision. In fact, I find myself preferring the AI version of the bench; it conveys the same stillness but with subtly enhanced atmosphere and tonal balance.
Image 3: the original bench.
Here the ethical questions become more complex. These AI images look photographic — they depend on photographic understanding — yet they are not photographs. I hold the copyright to my original work, so my re-creation raises no issue. But what happens if someone else feeds my images into an AI system and generates “their” version? Would that be transformation or imitation? Inspiration or theft?
Image 4: the AI-generated bench.
AI image generation challenges our long-held ideas of authorship and originality. It reminds us that photography has always evolved with technology — from glass plates to sensors, from darkrooms to Lightroom. Perhaps this is just the next chapter. But as creators, we will need to decide not only what we can do with AI, but also what we should do.