Initial Exploration of (and Reactions to) AI and Adobe Firefly

Despite my limited experience, when I think about AI I do so with a prevailing sense of foreboding and depression. As a photographer, designer and general creative I feel caught in a bind between having to learn new techniques and creative processes, and having to use the very process which might spell my doom! AI certainly has great potential, and it is a tool being developed at an astonishing and ever-increasing rate. However, it also feels — both in society and in the creative industries — as if we are racing towards a precipice. This is not, however, because of the technology, but because of the aims of its creators, a lack of understanding and preparation within society for this brave new world, and our nebulous future once the changes and true ramifications of this new technology have been understood and realised.

A daguerreotype of a shipwreck generated using Adobe’s Firefly 2 engine. The original work was actually in colour, despite having the daguerreotype style selected, so I had to edit the image further in Adobe Photoshop to remove the colour and give the image its period sepia toning.

 

Comparisons with Britain’s Industrial Revolution

I remember back to my days studying British history in secondary school, when we covered the industrial revolution in detail and how it drastically changed British society. Within a generation people had moved from rural dwellings, where they eked out a subsistence living through small cottage industries, to giant monolithic cities with their factories and mills. This fundamental change in our society did not necessarily benefit the people whose lives it predominantly affected. The people who had until recently lived in a hovel and subsisted on cabbages and the occasional potato were still poor in the city. They had just traded their small business — typically a loom in the front room — for a wage and a shift. And whilst there were benefits, and the changes the industrial revolution ushered in did allow society to develop at ever more astonishing rates, those benefits were mostly felt by the mill and factory owners and their investors. For the average worker conditions were still hard and wages were often poor.

Colour engraving of Ned Ludd, leader of the Luddites, a 19th-century movement of English textile workers who opposed the use of certain types of cost-saving machinery, often by destroying the machines in clandestine raids.

The potential implications of AI, given time to develop (and I fear this will not take long), are colossal. However, a big difference between the AI revolution and the industrial revolution is who it affects, and the lack of preparation within society to help mitigate its implications. The industrial revolution mainly affected the working class and those with little formal education. At the time the middle class in Britain was relatively small (mainly bankers, teachers, doctors, etc.). The AI revolution, however, seems to be targeting today’s white-collar workers, and the higher and creative professions. In short, if you work in one of these higher professions, which take skills, experience, education and aptitude, then you should worry about AI and whether it can potentially replace you. Now this might not be a problem were it not for society’s global free-market economic structure and the general workings of capitalist societies, whereby I work to earn money to spend on materialist things. If whole swathes of society are to be replaced by thinking machines, then we are going to need some other way to earn our living, or reconcile ourselves to a life of enforced luxury and a citizen’s wage (which it is your duty to spend on trinkets and whatnot). However, this does not seem to be happening any time soon, and certainly not before AI develops into the leviathan it can become. This makes me think back to those history classes in school, where we also covered Ned Ludd and his pesky Luddites, who sabotaged machines in rebellion against the industrial revolution’s perceived threat, and against the factory owners who used machinery to replace the skilled labour of workers and drive down wages by producing inferior goods.

A grand piano shipwrecked on a beach.

The creativity in using AI is all about the quality of the prompt you give it. Think of it as instructions. Here I told the AI to create an image (photograph) of a shipwrecked grand piano on a beach in a daguerreotype style. Working on my shipwreck concept, I realised that it’s really about abstraction: shipwrecked items on a beach are items which are out of place, or which should not be there.

So am I a modern day luddite? Part of me sincerely hopes not, as it is a mark of my profession that I have dedicated much of my working life to being close to the forefront of technology and learning new skills, techniques and creative processes. I am not a refusenik, and always want to learn something new. In the art, design and creative worlds you have to be this way, and this process of perpetual learning is engrained into us from a foundational level. Indeed I have even been able to use this to my advantage, as new processes invariably increase capability and in places have effectively reset that part of the industry, thereby opening it up to new people. For example, I remember (and hated) being forced through technical drawing in secondary school, where we had to draw three dimensional objects from different angles, and with perspective! However, throughout my entire career, which has even included working with interior and architecture students at a university level, I have never really needed these skills which I learnt in school. Computers and developments in software made such skills semi-redundant, so long as you understood the underlying principles and were thoroughly trained in the software. The same is true with my photographic education. Whilst I spent much time in a darkroom and processing film (which I really loved), I would never consider myself a master, and many of the advanced image manipulation techniques of yesteryear — like airbrushing Stalin’s withered arm — were already consigned to the history books and / or replaced by Adobe’s digital darkrooms! This has meant that I have not had to master these techniques, and have instead learnt to master Adobe Photoshop (wizard with the Clone Stamp!).

So this is both my rebellion and my confession. A confession that I have, like the dodo walking to its doom, started to flirt with AI and test its potential. Having watched Evgeniya across our studio these past few weeks exploring ChatGPT and Adobe Firefly 2, I have inevitably succumbed to my ingrained fear of missing out and have decided to bite the bullet! And with all that in mind (yes, I know it’s a lot to digest), next will follow some first thoughts and initial reactions / commentary about Adobe’s Firefly 2 AI engine.

 
AI generated daguerreotype of a birdcage washed up on a beach.

Another scene from my daguerreotype shipwreck concept series. This time I instructed Adobe’s Firefly 2 engine to generate the scene with a birdcage washed up on the beach. To make this work you really have to think creatively about what sort of items might come from a shipwreck!

Initial Impressions & How Things Might Develop

Adobe’s Firefly 2 engine (along with its original Firefly engine) is available through its website, but the experience you get and your access may vary depending on where you are located. Some users may only be able to access the original Firefly engine, and the whole thing is very much still in development. The AI is constantly improving (it’s slowly becoming smarter than you), as are its tools, features and the processes which Adobe is prepared to offer — so don’t expect things to remain the same for long, and I expect that in future we will see better integration into each of Adobe’s core programs. Currently the latest edition of Adobe Photoshop (Adobe Photoshop 2024, version 25.2.0) has some limited AI integration, allowing you to conduct certain tasks and / or image manipulations through its new Contextual Task Bar and panel. However, I feel that this is just the beginning, so we’ll see how things develop!

Using Reference Images & Creating Shipwrecks

Adobe’s Firefly 2 engine has the ability to add a reference image, which is particularly useful for instructing the AI to produce an image in a particular style, or with a specific look. When first experimenting with the AI I decided to upload an image from a recent holiday to Southwold and Walberswick on England’s east coast. The weather was particularly stormy: with violent surf, muted colours and tempestuous looking skies! This is also where my shipwrecked theme was born, as this particular coastline has long been famous for its shipwrecks throughout the centuries, and on a previous childhood holiday in the area we discovered a shipwreck just a few miles up the beach near Covehithe and Benacre Broad.

The work the AI created was not too dissimilar to my reference image: the AI took my photograph and produced its own image in the style of the original. It used the same weather, shingle on the beach, violent-looking surf and stormy sky, but added an already wrecked and slightly rusty-looking ship lying broadside to the beach, with a large wave breaking over its stern. The viewing angle of the beach was from the other direction (the beach in my photograph was on the left, whereas in the AI-generated image it was on the right), and the background details, although small, ambiguous and out of focus, were a complete fabrication.

Overall, I would say the AI did a very good job balancing fact and fiction. The shipwreck it created was mostly convincing, and looked like a late-19th-century ship which had been wrecked several years before, with the sea allowed to work away at the wreck, slowly breaking it up further and rusting its presumably ironclad hull. The major elements of the image, e.g. its composition, were a fiction, but the low viewing angle (as if the viewer is almost crouching in the surf) was taken from my reference image. And because the shipwreck is viewed from fairly far back, it is difficult to spot any mistakes or random artefacts which the AI is currently prone to producing (some of the rigging is a bit suspect, but then it is being blown around in stormy weather).

Colour AI generated photographic scene of a shipwreck on a stormy day.

As you can see I have a thing about shipwrecks! Here is another shipwreck scene created using Adobe’s Firefly 2 AI with a reference photograph from a recent holiday to Southwold and Walberswick on England’s east coast.

 

Selecting The Right Image & The Battle Of Syntax

Having generated my first image of a shipwreck using a reference image, I decided to continue to develop the concept further with modified prompts. Here I replaced words in the prompt, for example ‘shipwreck’ with ‘flotsam and jetsam’ or words like ‘driftwood’, to generate different responses which worked with the theme. And because I kept the same reference image, those responses were in the same style as the first shipwreck image.

Adobe Firefly generates multiple responses to the prompts which you give it, and then you select the results you want to keep, either by adding them to your Library or by downloading the image directly to your computer. I found the best tactic was to carefully select the images so that they all appeared to be from the same shipwreck scene, but I had to be careful because every result the AI generated was different. You have to be spatially aware, like a film director controlling his cameras, and understand the space within the scene so that you do not include any wrong elements or compositions when generating a series of images. This was particularly noticeable with backgrounds (every time they were different), and I had to watch out for images shot from different angles and / or directions so that the scene looked consistent.

 

The Tale Of The Dragon’s Teeth

It was at this point that I discovered that Adobe Firefly struggles with some words and phrases. For example, it does not really understand what flotsam and jetsam is! In nautical lingo it is the debris washed ashore (or left drifting) from a shipwreck. So I had to experiment here and see what words it would understand, and what it would generate based on my modified prompts. The AI works best with specific prompts: for example, saying ‘driftwood’ works better than ‘flotsam and jetsam’, because it can understand what driftwood is without having to interpret things creatively. This is because at its heart the AI is programmed as a creative tool, and therefore if it does not know what something is, or has an unclear instruction, it will do its best to interpret and / or create — much like a person, but sometimes with questionable results! For example, as part of a later experiment, I uploaded a reference image of a sandy beach with sand dunes and asked the AI to generate a picture of a World War Two beach with ‘dragon’s teeth anti-tank defences’. However, the AI clearly struggled with its interpretation of ‘dragon’s teeth’ and created something which looked like it was out of Game of Thrones!

Dragons on a beach generated using Adobe Firefly

Now if it was a full-sized fire-breathing dragon, then it might stand a chance of stopping a German invasion in 1940! However, these miniature dragons which Adobe’s Firefly 2 AI generated for me are too small, and certainly not the dragon’s teeth anti-tank defences which I asked for!

 

Despite being a creative tool — and one which in theory can learn, and one which I imagine has access to mankind’s greater lexicon of knowledge — Adobe Firefly AI is currently not very good at generating historically accurate objects and subjects. It’s very good at creating realistic looking images, especially with well prepared prompts and select reference images — but if you want historical or technical accuracy, then you are not going to find it here yet.

To further prove my point and test the AI, I decided to generate some images for a project I have been working on about my grandfather’s service in World War Two on HMS Ausonia and in the Battle of the Atlantic. I asked the AI to generate an image (photographic) of the interior of a German u-boat, and even gave it a reference image of the interior of a Type II u-boat (Vesikko, Helsinki). However, what it produced looked like something out of a steampunk anime, complete with futuristic controls and even windows — yes, ‘windows’ on a submarine; Nemo would be proud!

Undeterred, however, I persisted and uploaded a different reference image, this time of the outside of said u-boat. The image was clear and contained the submarine’s deck, conning tower and periscopes — and the AI was clearly instructed to generate an image of a u-boat in stormy weather. However, again the AI used a hefty dose of creative licence to create its scene, and the resulting u-boat looked like an escaped Pirates of the Caribbean character crossed with steampunk. The generated u-boat had portholes (something u-boats do not typically have), masts and rigging, and even appeared to be coming ashore, like a floundering whale!

Eventually we got there, and Adobe Firefly 2 managed to produce a scene of a World War Two convoy. The stormy effects and atmosphere give this image an ambiguous feel which disguises some inaccuracies. For example, most bulk carriers and tankers of the period had their bridge and major superstructure up forward (near the forecastle), with the funnel and aft superstructure close to the stern. The decks do appear to have davits and cranes for unloading cargo, which was an important feature on ships of the day because port facilities did not always have large cranes — but again the details here are unclear or masked.

 

Lost In The Jungle — Some Further Thoughts About Accuracy

Moving on from my failed u-boat and dragon’s teeth experiments, I decided to change direction and create a jungle scene using a reference photograph which I had taken several years back on Tenerife. I prompted the AI to produce images with various different animals which I specified in the prompt. Overall, the results were very good, with the generated images appearing realistic and in the style specified by my reference photograph. However, as to whether any of the animals in these generated images are indeed accurate I could not tell you; suffice to say they are probably not. They are certainly very good simulacra: the reptile looks like a reptile, and the rendition of a parrot looks true to form, but that is where it ends, because I suspect what has been created is not an actual breed or specific species. If I were to put these images in front of a qualified zoologist then they would probably laugh and instantly recognise the images for what they are.

This is something which society will need to grapple with as we sort fact from fiction; and no doubt there will be incidents along the way. For example, what if technically inaccurate images like my jungle scene were used in a school textbook or encyclopaedia? This has the potential to further distort human knowledge and understanding of the world around us, especially considering that, because of our increasingly unnatural digital lives, we are experiencing less and less of it.

Looks convincing, right? But is it a real lizard? I suspect a qualified zoologist would be able to spot the difference! The point being that the images which Adobe Firefly 2 creates certainly look realistic to the untrained eye, but the engine is at heart creative, and will therefore endeavour to create something rather than just copy a facsimile of something which already exists. However, this makes the engine difficult to use if you want to create anything factual (at the moment). These images, for example, should probably not be used in an encyclopaedia or school textbook!

 

Some Final Thoughts (But Not The End of My AI Experiments)

These are some of my initial thoughts, impressions and experiments working with Adobe’s Firefly 2 AI engine. It is already a powerful tool, and one which is being developed and integrated rapidly. In a few years’ time the processes, the type of work, and the speed at which we can accomplish tasks will change drastically. But there are many implications which need to be considered here, especially the value of work. On one hand AI is a great opportunity to boost capacity and productivity, but on the other it could mean a further downgrading of the value placed upon creative work. I feel like everyone should have a little creativity in their lives — but if everyone can be called a creative because they can command an AI, then where does it leave the true creatives and people who use their hands? I suspect time will tell, and rather sooner than later. In the meantime, I will do the only thing which I can do, and that is to develop my skills using AIs and see how I can integrate them into my workflow and creative process.

 

We have got some educational content about AI in the pipeline, but in the meantime why not explore some of our other classes, like my Beginner’s Guide to Colorizing Old Photographs in Adobe Photoshop (available on Skillshare and Teachable), where I look at Adobe’s Colorize neural engine in more detail and explore how it can be used in an image colourisation and restoration workflow. And to complement this class, I have also got a class, Beginner’s Guide to Retouching Old Photographs in Adobe Photoshop (available on Skillshare and Teachable), where I explore Adobe Photoshop’s tools for restoring old photographic prints and negatives. Both of these classes are also available as a Photo Restoration in Adobe Photoshop bundle on Teachable.
