Over the weekend, news of Hitchbot's beheading shocked the Internet. The cute, kid-sized robot had been hitchhiking across Canada, Germany and the Netherlands, and was about to take on America.
Its goal was to travel from Massachusetts to California, but its journey was cut short just two weeks in, by vandals in Philadelphia, a.k.a. the City of Brotherly Love. Hitchbot's arms and head were torn off, with the headless body seemingly left for "dead."
Hitchbot fans are trying to piece together clues as to who the perps are, with some claiming to have surveillance-camera footage of the event.
In recent weeks, there's been a lot of debate over the danger of killer robots. But what if it's violent humans that pose a more immediate threat?
"Usually, we are concerned whether we can trust robots, e.g. as helpers in our homes," Hitchbot's Canadian creators told The Atlantic in 2014. "But this project takes it the other way around and asks: can robots trust human beings?"
The resounding answer to that query seems to be 'no.' This isn't the first time we've seen human-on-robot violence. A recent study showed kids liked bullying and harassing mall robots. Some didn't stop when the robot told them they were hurting it. Other humans have been caught on video kicking robodogs.
Researchers and ethicists have been thinking about how these types of questionable behaviors might impact society for some time. In 1999, Pete Remine founded a Seattle-based robo-rights project, dubbed the American Society for the Prevention of Cruelty to Robots. Its purpose is "to ensure the rights of all artificially created sentient beings."
Another robotics researcher argued in a paper in 2010 that if we didn't deal properly with the issue, "abuses towards robots may become a serious hindrance to their future deployment, and safety." At the time, most artificial intelligence researchers thought he was a bit of a kook, but now others, like the University of Washington's Ryan Calo, a cyberlaw expert, are backing him up.
And the public, too, seems less tolerant of, or at least disturbed by, robot cruelty. There's a whole subreddit dedicated to robots' rights. When news of Hitchbot broke, fans flocked to Twitter to express their dismay.
This may be because robots are fast becoming part of our everyday lives, in factories, hospitals, offices and our homes. Many of them look human- or pet-like, with parts that resemble faces, arms and legs. Others are just programs on our phones. But regardless, we play with them, confide our deepest secrets to them, talk to them on the regular, and even bury them when they poop out.
How will the law treat robot abuse? Robots aren't considered people, though we do have a tendency to anthropomorphize them. Their legal protection is based on our ownership of them. But given the sentimental value we place on them, could they one day have the kinds of protections pets get with animal cruelty laws?
"Maybe these Canadians have a claim for more damages than just the physical robot because of the likelihood of sentimental attachment," Calo told me over email. "[But] maybe the authorities, in setting enforcement priorities, should be more alarmed that people are willing to destroy an anthropomorphic machine than deface some other object."
There's some research that suggests that humans who hurt animals are more likely to hurt other humans too. Will the same be true for robots? Kate Darling, a robot ethicist at MIT who studies robots and empathy, says yes.
"There's starting to be a strong parallel in terms of robots and animals and how we interact with them," she said. "People will develop relationships [with robots] like they would pets, [and] there might be reasons that we protect animals from being abused beyond the fact an animal feels pain. It's the type of human behavior we don't want to encourage. We worry that if people abuse animals, they'll be more likely to abuse humans."
Unlike with humans or pets, the abuse can be reversed. A group of techies in Philly has volunteered to help piece Hitchbot back together, Humpty Dumpty style. The thing with robots is that their "brains" are perfectly capable of outliving their bodies.
Right now, much of the artificial intelligence that's baked into the robots in our lives comes through an internet connection. It's stored in server farms distributed all over the world. Researchers in Europe and the U.S. are trying to build better distributed brains for robots. The idea is that each droid learns from its own individual experience, and then that gets beamed up to a master brain that logs the information and disseminates it to every robot connected to it. Everybody benefits.
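The learn-locally, share-globally loop described above can be sketched in a few lines of toy Python. This is purely my own illustration, not any real project's code; the class and method names (`MasterBrain`, `upload`, `disseminate`, `sync`) are invented for the example.

```python
class MasterBrain:
    """Central store that pools every robot's logged experiences."""

    def __init__(self):
        self.shared_log = []

    def upload(self, experiences):
        # A robot beams its local experiences up to the shared store.
        self.shared_log.extend(experiences)

    def disseminate(self):
        # Every connected robot receives the full pooled log.
        return list(self.shared_log)


class Robot:
    def __init__(self, name, brain):
        self.name = name
        self.brain = brain       # connection to the master brain
        self.local_log = []      # this robot's own experiences
        self.knowledge = []      # pooled knowledge from the whole fleet

    def experience(self, event):
        self.local_log.append((self.name, event))

    def sync(self):
        # Upload what this robot learned, then pull down everyone's.
        self.brain.upload(self.local_log)
        self.local_log = []
        self.knowledge = self.brain.disseminate()


brain = MasterBrain()
r1, r2 = Robot("droid-1", brain), Robot("droid-2", brain)
r1.experience("human offered a ride")
r2.experience("human tore off an arm")
r1.sync()
r2.sync()
r1.sync()
# After syncing, droid-1 "knows" what happened to droid-2.
print(("droid-2", "human tore off an arm") in r1.knowledge)
```

The unsettling part the article gestures at follows directly from this design: one robot's "bad" experience becomes every robot's experience the moment it syncs.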
If these robo-brain projects pan out, robot cruelty could lead to an army of pissed off robots that share the experience of abuse inflicted on their brethren. What if the robots have also been coded to protect themselves?
"With such survival skills built in, the robot can then start behaving unexpectedly when it concludes that a certain human may pose a risk to the robot's survival. With the ability to upload its software to the cloud right before its demise, a next generation robot could build on the previous "bad" experience and start becoming aggressive towards humans," Bart Selman, a robotics expert at Cornell University, told me.
Unfortunately, there still seems to be a dearth of research into this topic. The Future of Life Institute, which recently doled out millions of dollars to fund research on how to prevent rogue robots from emerging, doesn't have research specifically focused on human-on-robot violence.
"This may be an area that could use further attention," Selman, who has an FLI grant, added. "We don't want 'evolutionary' pressure on robots to evolve into robots that view humans as possible adversaries."
But, the technology that would enable that sort of collective experience is still very much in the future.
"I don't know if we have a good enough definition of pain in order to program it," said Darling, the MIT robot ethicist. "I think that [scenario] is a little bit too far from where the current technology is. That's more of a far-future question."
In the meantime, she says, the silver lining in all this might be how far Hitchbot made it. Because he looked humanoid, and not like an inanimate object like a blender, "people had more of a sense of responsibility toward it. That's what a lot of robotics are trying to create. They try to create that sense of innocence and dependency and give people the feeling they're nursing something. It's pretty cool people respond to that."
RIP, Hitchbot. You've taught us a lot.
Update 8/3/2015 7:35 p.m.: Video of the Hitchbot attack has reportedly surfaced.
Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.