Earlier in the day, Jimena Canales began her lecture on artificial intelligence with a brief to-and-fro with Siri, her collaborator.
“Hi Siri, how are you?”
“I’m happy to be alive.”
“Are you human?”
“I’m not sure that matters.”
It matters to me, though. It occurs to me that, today, Siri, Alexa and their equivalents are the ubiquitous, public-facing encounter we have with AI on a daily basis; when I was growing up, it was computer games. In his discussion with the artist Ian Cheng, the AI specialist Richard Evans described a moment when the limits of AI were revealed to him, and what that meant. Working on The Sims 3, he was testing part of the game when a character arrived at his Sim’s door and rang the bell. The game AI understood that this was a cue to open the door and invite the guest in, and Evans was elated. A few moments later, however, his Sim got up from the sofa, walked upstairs and took a bath, leaving his guest alone in the house. It was, Evans realised, “a violation of a subtle social norm”, but a hugely productive error, one that helped him understand something key to developing artificial intelligence.
“Everything we know about the brain comes from things going horribly, horribly wrong with it,” James Bridle said this afternoon, in his conversation with Caroline A. Jones, a professor of History, Theory, and Criticism at MIT. Some areas of study are so complex that their functions are only revealed when something goes disastrously wrong: someone suffers a serious brain trauma, part of their brain stops working, and we think, “oh, so that’s what that does”. As AI becomes increasingly smart, increasingly troubling, and starts to butt up against human consciousness, we are left wondering what is left. “The main thing is making mistakes, and making jokes,” Bridle suggests, but Jones drives at a further point: the physicality of the human has never been threatened by the machine (and so much discussion of AI does mirror a brain/body divide — sex robots remain largely reactive vessels for heterosexual males to use as objects, not attempting anything near a subject-driven eroticism). “We describe artificial intelligence as being ‘like us’, and yet there is no gut, no chemistry… nothing that makes us meat machines,” Jones responded.
It’s not just guts and chromosomes that make us human, of course, as Evans discovered. It’s an understanding of context, and a complex, interlocking set of social practices and taboos, obeyed and violated; building improved AI requires understanding and modelling those practices. “Context does matter in terms of intelligence,” Evans says. There is, after all, so much we could do, right now, that we choose not to. Understanding the tolerances and affordances of readable behaviour within those limits is key to developing AI.
These social practices are important not just in developing improved AI but also in understanding and developing coherent fictional universes. Limitations are a basic building block of dramatic improvisation; four characters in an undefined space are trapped by unlimited possibilities. Once the four characters are locked in a car, the dramatic possibilities are, counterintuitively, multiplied.
Within sci-fi, it’s small limitations that can produce potential new worlds and help draw the limits of social practices. In posing simple questions — for example, what would the world be like if gravity were reduced by 50%? — a whole new set of social practices is needed, and a quality sci-fi author is distinguished by their ability to conceive of coherent social practices, and of the philosophical questions a changed world offers. In ‘The Left Hand of Darkness’ by Ursula K. Le Guin, the inhabitants of Gethen have no fixed sex. Instead, they develop primary sex characteristics for two days in every 26-day lunar cycle. With this change, an entire new world of social practices must be written, one that raises deep questions about the assignment of gender roles and the enforcement of a gender binary (not least in 1969, when ‘The Left Hand of Darkness’ was published).
The really difficult questions in AI, then, are not simple or even definable, Evans argues. They concern the creation of a model that is both crisp and tolerant of fuzziness, that can both learn from, and learn to tune out, noise. As the Marathon draws to a close, that remains a frequent refrain of the day’s events: that the empirical and hard is vital, and yet not enough; that splits between the rational and the occult are flimsy to the point of collapse; that old hermetic truths about the spiritual, mental and physical hold their power; and that it’s often tolerance and fuzziness, not clarity and surety, that lends intelligence, artificial or otherwise, its power.