The article I’m writing about stems from a 2015 incident that raises the question: where do we draw the line between human and computer culpability? So what was this curious incident that had us questioning whether robots can be arrested? The article reports on an algorithm that went shopping on the darknet and had its purchases delivered to the studio of the artists (!Mediengruppe Bitnik), who then set up an exhibition to show off the purchases their bot had made. The day after they took down the exhibition, the Swiss police “arrested” the robot (seized the computer) and confiscated all the purchased items. The bot and the items, save the ecstasy, were released within three months, but the case left people wondering who is truly to blame.
The people who developed Random Darknet Shopper, the algorithm from the incident above, defended themselves by saying they had created the bot for the purposes of experimentation. But was their decision ethical? I think that knowing beforehand that your actions might lead to a “crime” and proceeding anyway should be considered unethical. I’m sure there must be some way to explore their curiosities and conduct the experiment without actually breaking the law. Maybe the artists could have spoken to an external party and had them approve their purpose and process so the experiment could run under controlled conditions – this could have helped them find their answers while also keeping them out of trouble.
So maybe the decision was unethical, but should it be legal for the developers to create something of this nature? In my opinion, if the bot could be deployed in another environment where the purchases are far less likely to be illegal than they are on the darknet, then the developers did no wrong in creating the algorithm. If their true intention was to study trust and not, in fact, to buy illegal items, then that should be evident, and they shouldn’t be penalized for their curiosity and their ability to channel it creatively. But of course, certain borders need to be defined to set limits on what kinds of “unintentional” crime can be pardoned.
When I first read that a robot had been arrested, I pictured authorities hauling away one of the classic computerized sidekicks we see in sci-fi movies. But to get back to the point at hand, it did make me question whether we now have strict, established rules about what robots can and can’t do. I personally don’t think we’re at the stage yet where there’s a universal moral code that explicitly states what actions make a robot “criminal”. I think, for now at least, it’s the developers (and/or owners) and their intentions that will be scrutinized when an algorithm performs illegal activity. But take a moment to imagine a robot presenting itself in court and being prosecuted for crimes committed against humanity (or against other bots?!). What kind of sentence would be just for a bot? Terminating the software, perhaps?
Professor Schafer from the University of Edinburgh draws an interesting analogy: the liability for harm done by a robot should be assessed much like the liability for an injury caused by an electric drill – we ask whether it was the manufacturer’s fault or the owner’s. This is a fair remark if we perceive bots as simply tools built to assist us and simplify our lives. In the incident we’ve been discussing, we shouldn’t be asking whether Random Darknet Shopper should be held responsible; we should instead look at the developers and the artists who owned the bot. After all, the robot only carries out the orders given to it by a human (for the most part). Things only get complicated when a bot can self-learn and do something its maker or owner did not expect, which wasn’t the case in the Random Darknet Shopper incident. I think it’s also fair for Schafer to add that the existing legal framework we have for dogs could be applied to bots as well. Incriminating a smart bot would be to personify it – to treat it as if it were a human with its own mind and intentions, aware of its actions and their implications – which is not the case, especially with this specific piece of software that was caught buying ecstasy. Just as with dogs, the person who places the danger in the environment is responsible for any harm caused.
Coming back to the main question, who should be held liable and under what conditions?
- If the illegal activity was a direct result of the code as written, and the bot was expected to “malfunction” the way it did, then the developer should be held responsible.
- If the incident occurred due to a physical fault in the computer’s design, then the hardware developer should be held accountable.
- The system designer would be at fault if they failed to catch an error in testing since they are responsible for successfully integrating the hardware and software components.
- The seller shares responsibility for ensuring that the people who built the product completed all testing before it was sold.
- If the client/owner intended to use the robot for malicious purposes, then they should definitely be charged.