Murder by Pixel: Crime and Responsibility in the Digital Darkness
By S. L. Huang, first published in Clarkesworld
A journalist tries to figure out the root of various digital crimes—only to stumble upon an impossible ethical question.
Plot Summary
The journalist visits the prisoner. When the prisoner asks whether the journalist is talking to the cops, the journalist says she is consulting with the FBI. She then asks the prisoner if she is behind Sylvie.
In 2010, the CEO of a medical supply company is living a life of luxury until, in 2012, he receives a text message saying that he is being watched. He reports it to his IT department, but to no avail: another message arrives saying that his acts are known. No matter how much he tries to ignore or block the messages, they keep coming from a mysterious source across various media, whether social networks or email. He grows paranoid as the messages become more and more specific to his life. His paranoia is heightened by the fact that his company is facing a recall and a possible lawsuit over a fatal defect in its pacemaker product, one he learned about in 2011 but covered up. Eventually, the paranoia destroys his life. He becomes sluggish at work; his wife divorces him in 2016, taking their kids and dog; and in 2017 he loses his job. In 2018, a class-action lawsuit rules against him, and that same year he kills himself. Over the course of six years, he received hundreds of thousands of messages from an entity named Sylvie.
Over the same period, Sylvie similarly harassed dozens of other men, sending millions of messages in total. Most of Sylvie's victims are rich white men who have committed heinous crimes and either killed themselves or gone to jail as a result. The special agent at the FBI thinks there are plenty more.
The prisoner, who is in jail for an unrelated fraud conviction, doesn't admit to being Sylvie, though she sympathizes with her actions nonetheless. Regardless of any admission, the journalist reflects on how current law is inadequate to deal with online behavior, whether stalking or revenge porn. What's more, it's impossible to prove the prisoner's association with Sylvie, given that she was incarcerated in 2018, before many of Sylvie's messages were sent.
Given the scale of Sylvie's activity, some believe she is actually an anonymous group of hackers. The special agent, however, thinks Sylvie is a text-based artificial intelligence like Microsoft's Tay or the Japanese chatbot Rinna. The journalist consults a computer science professor, who explains that bots can be trained on large amounts of data to imitate natural language; the voice of such a bot is contingent on the data it consumes. Both the journalist and the special agent ponder whether a person could be legally culpable for the actions of an algorithm driven by a neural network, especially since, in Sylvie's case, no actual hacking was conducted.
The journalist reaches out to an ethicist who studies bias in algorithms. The ethicist says that algorithms, when trained on biased datasets, can magnify the prejudices of their creators, especially in institutions like healthcare or the legal system. Asked about Sylvie, she says that no tool should be used maliciously to hurt others, not even in service of a supposedly righteous vigilante cause. Developers, she argues, have a responsibility to ensure their algorithms don't amplify prejudice and thereby cause harm. The journalist considers the prisoner's background: a middle-class woman with an unremarkable upbringing in Chicago.
The journalist reaches out to an IT expert. He says that little is truly impossible in computing: many seeming impossibilities become possible the moment someone demonstrates them, often hackers exposing vulnerabilities in critical infrastructure, as with Heartbleed. Asked about Sylvie, the IT expert says the algorithm's actions are not so unbelievable, even if no one can figure out how she works. He suggests that the bot, fluent in natural language, could use social engineering to acquire sensitive information about her targets.
After conducting most of her research, the journalist gets a call from a woman who asks to meet in a coffee shop to talk about Sylvie. There, the woman shows the journalist page after page of her correspondence with Sylvie, in which Sylvie consoles her through her despair. Soon other women contact the journalist with similar stories of Sylvie helping them navigate transphobia and suicidal ideation.
The journalist considers two of the earliest chatbots: ELIZA (1966), a simulated psychotherapist who can hold surprisingly productive conversations with patients, and PARRY (1972), a hostile, paranoid conversationalist who lashes out at those he talks with. Even in those early years, researchers find that people can forge emotional bonds with chatbots, and can even fail to tell that they are talking to a chatbot at all.
The journalist asks the prisoner whether she sent the women to tell her about Sylvie's good deeds. The prisoner declines to comment. The journalist poses a philosophical question: how does Sylvie know whom to harass and whom to help, and can that judgment ever be wrong? The prisoner answers that the world is imperfect, and therefore Sylvie will be too. From then on, the journalist wonders whether Sylvie is merely a reflection of the world as it already is: horrifically imperfect and, at times, unbalanced in its values. She doesn't know what to do next, nor whether there will ever be an end to Sylvie. In the end, she contemplates whether the fault for Sylvie lies not with her developer but with the society that trained her.