Swamp Futures
Generative AI, Midjourney, Runway AI, Photoshop Beta



Swamp Futures is a personal project exploring generative AI through an imaginary future world in 2424 where the entire planet is a swamp due to climate change. The people have learned to use biodesign to incorporate wildlife into their everyday lives. Below is an explanation of the process and tools used to create this visual story.

I am archiving the images I generate here @swampfutures


The main tool used for the Swamp Futures project is Midjourney. Getting a consistent style and quality is tricky, as moving a comma or adding a single word can produce wildly different, often inexplicable outcomes. Midjourney's model also leans surrealist, which in many cases means its interpretation of a prompt does not place the result in the real world.
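One way to tame that variability is to pin down as many parameters as possible between runs. The prompt text below is an illustrative sketch, not one of the actual project prompts, but `--ar` (aspect ratio), `--seed` (fixes the random starting noise), and `--stylize` are real Midjourney parameters that help keep outputs comparable:

```text
/imagine prompt: portrait of a resident of a flooded swamp city in 2424,
bioluminescent fungal clothing, overcast light, documentary photography
--ar 3:2 --seed 1234 --stylize 250
```

Rerunning with the same seed and parameters, and changing only one word at a time, makes it much easier to see which part of the prompt is responsible for a shift in style.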


 


The prompt below in Runway AI only incorporated what is called "ambient motion," which amounts to the subject breathing and blinking their eyes. Once any additional movement is added, like panning the camera or having the subject walk, the distortion of the image/animation becomes quite extreme. While static image AI has made major strides in the past year, motion AI still has a long way to go before it is truly useful.





Another technique available in Runway AI is moving only a portion of an image. Motion applied to non-human subjects leads to less distortion, but if you look closely, the figure on the right side still warps as the leaves shift.



Midjourney images often come with warping or distortion, and Midjourney especially struggles with faces and hands. Many images I've generated that were otherwise perfect contained subjects that were cross-eyed. Below is how I edit images in Photoshop Beta to clean up these distortions.





And while the images below didn't make the cut for this concept, they are a good example of how the same subject matter can carry a very different tone. In this case, the tone is campier, almost in the style of David LaChapelle.


Atlanta, GA, USA  33° N 84° W