Issue 37 – Automated Images
*Submissions are now due on November 30, to invisible.culture@ur.rochester.edu.*
The discourse around large generative models (“AI”)—both their threat and their promise—has taken on urgent salience in recent years, as images and texts increasingly populate and confuse our online spaces, classrooms, and public squares. Discussions of the visual often focus on the ethical and political implications of a flood of believably photoreal images for already-shaky regimes of truth. Yet images made through automatic processes were part of our visual landscape long before the introduction of AI models: pattern-recognition algorithms and computer vision, for example, create and capture the images that power ubiquitous corporate and state surveillance.
Models like DALL-E and Midjourney translate textual input into new images using models trained on gigantic image datasets derived from that surveillance and from the visual artifacts of popular culture. These new tools raise concerns familiar from the earlier forms of mediated representation they so effectively mimic. AI models, created by humans and embedded with their choices (O’Neil, 2016), reflect the data on which they are trained, giving rise to permutations of stereotype and caricature, racist depictions, and violence (Buolamwini, 2020; Noble, 2018). Against such forces, artists and activists have appropriated these tools to create new works that aggregate large image sets to oppositional or playful ends.
For our 37th issue, Invisible Culture seeks articles and artworks that address the broad category of “automated images” in its many valences. We welcome work engaged with the social and political effects of automated images, as well as submissions that approach them through the enduring questions and methods of visual studies (e.g., authorship, indexicality, and reproduction).