Humans have always been fascinated by robots and artificial intelligence. I mean, there’s R2-D2, WALL-E, the Terminator, and our fellas the Mars rovers.
You would think that, with technology advancing so fast, we’d be used to robots doing the rather mundane jobs, like being a security guard. Very, very, very wrong.
My friend John sent me a link to an NPR article about Steve the Knightscope robot, who worked in Washington, D.C. Steve would patrol the grounds near the Georgetown waterfront, handing out citations or tickets and making sure everything was okay.
Unfortunately, the robot met its demise when, according to Forbes, Steve missed a step and fell into its rival: a water fountain. Forbes and NPR agree it was not foul play; they disagree on the motive. Forbes claims it was sensor issues. NPR had much more fun, suggesting Steve began questioning its existence and saw the water fountain as a comforting way to end things.
I was much more entertained reading the NPR story, and my heartstrings really got pulled as Scott Simon, the writer, toyed with the idea that Steve, and robots in general, genuinely felt the void of true emotion as they watched humans experience love, happiness, and purpose. That’s not to say the Forbes article, written by Kalev Leetaru, was not entertaining. It offered a realistic possible explanation for what happened to Steve (what is realistic these days?) before getting into the question of robotic rights. Yes, you read that correctly: robots have rights.
A good point Leetaru brought up is the scenario of a security-guard robot patrolling streets lined with late-night spots and bars. The bars close and the drunkards fill the streets, causing chaos. The security robot films the illegal acts, such as vandalism, as evidence, or tries to calm a drunkard down. The drunkard doesn’t like the robot and gives it a punch, or worse: destroys the thing. Can the robot’s buddies get together and press criminal charges against the drunkard? Would the manufacturer, or the company that hired the robot, provide the necessary tools and resources to fix him (or her? it?) up? Do robots get paid to be guards and punching bags? Would they have the right to fight back? What are their rights? And if one of their functions fails because the manufacturer was negligent, could they sue their creator? What if they go Frankenstein on their creators? What if they really do process and feel human emotion, but dwell on the negative and try to take over? Would they get the right to vote?
I know all this sounds kind of ridiculous, but I take comfort in knowing I am not the only one asking these questions. Considering we are surrounded by new pieces of smarter tech every day, academics, government officials, and commercial groups are asking them too. One has to wonder: are robots people too?
Read the full articles here: “The Sad Drowning Of Steve The Robot And The Future Of Robotic Rights” and “I Sink, Therefore I Am: This Robot Wasn’t Programmed For Existential Angst”