Algorithms Without Vision

The change in the social functioning of photography is a fact, but the underlying image-distribution networks are not neutral. If we agree that software is part of the apparatus responsible for the production and circulation of images, we cannot forget that it serves purposes beyond improving communication with our relatives and friends. It is also part of an extractivist logic that is crucial to the functioning of contemporary communication networks: it extracts information about our behavior and emotions from the content we create – images included – and from the digital traces we unknowingly leave behind.

Of course, the alliance between photography and surveillance systems, and even more so the classification practices used by various centers of power, is nothing new. And yet it would be a cliché merely to point out that the algorithms monitoring the global circulation of images recall uses of photography less noble than communication. The point is rather that it is impossible to look at them at all, as they remain, to a large extent, black boxes. And yet we know that how algorithms see us matters, because they watch us more often than humans do. They are part of a more complex cascade of gazes, influencing, in turn, what we are shown.

Since image-recognition systems are non-transparent, what remains are reverse-engineering experiments. As a simple exercise, I uploaded Marta Ziółek’s opening images for this series to several web services. Google Cloud, which promises to “derive insights from your images,” doesn’t know how to deal with them at all. The braids turn out to be earrings. But that’s not the only problem, because the algorithms recognize not only people, items of clothing, and objects; they also classify emotions. The standard set used by the algorithm (joy, sorrow, anger, surprise) proves insufficient – the listed emotions turn out to be “unlikely” or “very unlikely,” and only “surprise” in the top photo is “possible.” The algorithm is confused, unable to cope with the classification. And yet, given the growing importance of such automatic emotion-recognition systems, such confusion, not to mention outright errors, can have real consequences. Incidentally, it is this problem that was ridiculed by the researchers at Dovetail Labs, who created emojify.info, a website that lets you have a “face duel” with the algorithm. It’s worth checking out, to see how clumsy the models can be when they assume that the curl of the corners of the mouth or the position of the eyebrows can precisely define our mental state.
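For readers who want to repeat the exercise, here is a minimal sketch of what such a query looks like, written against Google’s published Python client for the Cloud Vision API. The filename is illustrative, and the likelihood buckets the API returns (“VERY_UNLIKELY,” “POSSIBLE,” and so on) are exactly the labels quoted above:

from google.cloud import vision

# Requires `pip install google-cloud-vision` and API credentials
# (set via the GOOGLE_APPLICATION_CREDENTIALS environment variable).
client = vision.ImageAnnotatorClient()

# Load one of the images (the path is illustrative).
with open("marta_ziolek_01.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Face detection returns likelihood buckets for the four
# "standard" emotions mentioned above.
for face in client.face_detection(image=image).face_annotations:
    for emotion in ("joy", "sorrow", "anger", "surprise"):
        likelihood = getattr(face, f"{emotion}_likelihood")
        print(emotion, vision.Likelihood(likelihood).name)

# Label detection is the step that mistakes braids for earrings.
for label in client.label_detection(image=image).label_annotations:
    print(label.description, round(label.score, 2))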

Such mistakes probably have a greater impact on our imagination than the helplessness of Google Cloud – especially since they reveal all sorts of biases. This is well illustrated by an experiment with PimEyes, an algorithm-based service that reportedly does a record-breaking job of finding similarities between uploaded images and photos on the web. Its business model is based on an image-control service – the idea is to search for images that resemble our own, and possibly allow us to intervene when they are used without our consent. The problem is that once Marta’s photos are submitted to PimEyes, the screen is flooded with porn. Algorithms don’t understand context, and they associate a woman with parted lips and an outstretched hand with pornography – perhaps the only form of transgression known to software (we can spare ourselves jokes about the sexism of the IT industry). Unlike in Rob Wasiewicz’s work, there is no room for humor or irony here – no casseroles or giant women devouring subway cars. Moreover, in an attempt to better understand the logic behind such choices, I took a selfie while emulating Marta’s gesture – lips parted, hand outstretched. Less than a second after I submitted it to PimEyes, the screen was covered with aptly chosen photos of myself. The only mistakes were photos of similar-looking men in public-speaking situations. So much for biases. You know: a guy with a beard and glasses usually opens his mouth in order to say something into a microphone; a woman – to subordinate herself to male satisfaction.

Why does this matter? First of all, because just as subway cars run on rails (let’s try to inject a bit of the humor that machines lack into this gloomy argument), the images circulating among us are, to a large extent, directed by similarly automated software. The limits of its imagination become our horizon. Secondly, as Vladan Joler and Matteo Pasquinelli write, such an algorithmic “undetection of the new” condemns us to look in everything for what has already been – always finding well-recognized patterns, and thus repeating old mistakes. Not very useful in times of crises that may have no precedent in history (not to mention its sustaining of “good old” sexism).

In her excellent latest book, Atlas of AI, Kate Crawford presents her journey through the places that reveal the backstory – usually invisible to us – of the functioning of new, “smart” technologies. The author visits lithium mining sites, but also the archives of the government agencies that make mugshots of arrested individuals available to the cybercorporations that use these images to train facial-recognition algorithms. In a poignant passage, reviewing photos of people taken at difficult moments in their lives, Crawford shows the effects of the absence of any broader discussion of the issue, denouncing “the unswerving belief [of the tech sector] that everything is data and is there for the taking. It doesn’t matter where a photograph was taken or whether it reflects a moment of vulnerability or pain or if it represents a form of shaming the subject. It has become so normalized across the industry to take and use whatever is available that few stop to question the underlying politics.”

Crawford – probably known to photography enthusiasts from her collaboration with Trevor Paglen, whose projects touch upon, among other things, automated vision systems – writes about a paradigm shift different from the one described by Nathan Jurgenson. It is a shift from image to infrastructure, where context once again ceases to matter – the stripped-down images are thrown into an immaterial machine that squeezes out the data that allows the system to function. It is the grim reverse of the process of socialization. From this perspective, it seems important for creators to regain control over their images. Without that, it is difficult to speak of a true democratization of photography and of the growth of its social dimension. Even if, for most of us, these disturbing processes remain invisible – or, as I mentioned, precisely because they do.

I open my mouth

I open my mouth. I let my lower lip relax and drop. I relax my jaw, my cheeks. I close my eyes. I turn my eyeballs toward the back of my skull. I feel ripples all the way from my tailbone to the back of my head. My body is all in motion. I feel it from the base of my feet to the root of my tongue. My tongue droops, my hands reach out, opening my body, zooming in and negotiating space. I allow my eyelids to open. I see and feel through my skin. By way of my tongue, it emerges from my mouth. I feel a vibration down my body.

Today, our epidermis, the mask we wear, and the air we breathe have become the newly established boundaries. By revisiting the basic choreography of the mouth and the physiology of the female body, I mediate historical gestures, questioning the violation of bodily boundaries, the kinetic, and the tangible in the image. My body is frozen in gesture, between one bodily movement and another.

Credits

  • Costume: Joanna Hawrot in collaboration with Rafał Dominik; photo: Karolina Zajączkowska