Of course, this is not about Tesla, but about intelligent mobile autonomous systems (IMAS) – also known as robots. The philosophical problem arises from the fact that the robot, in this case the automobile, says “Leave the driving to us,” which was once an advertising slogan for Greyhound Bus. If the robot takes over the driving, and with it the decision making, then who is responsible for accidents? This was not a problem for Greyhound, since the driver, and in some cases the company, was held liable for mistakes. But what about the AIs?

Indeed, the question of accountability, responsibility, and liability for robots and other AIs has become a major topic in digital ethics, and everybody is scrambling to establish guidelines and norms for “good,” “trustworthy,” and “accountable” AI. It is at once interesting and unsettling that the ethical norms and values the AI moralists inevitably fall back on arise from a society and a culture that knew nothing of self-driving cars or of artificial intelligence. This was a society and culture that categorized the world into stones, plants, animals, and human beings, whereby only the latter were considered active subjects who could and should be held responsible for what they do. All the rest were mere objects, or, as the law puts it, things (res).

But what about the Tesla? Is it a subject or an object, a potentially responsible social actor or a mere thing? Whenever we go looking for who did it, we automatically assume that some human being is the perpetrator, and if we find them, we can bring them to justice. Whom do we look for when the robot “commits” a crime? How do you bring an algorithm to justice? And if we decide that the robot is to be held responsible, aren’t we letting the human creators all too easily off the hook?
These were the questions the EU Parliament recently had to deal with when it discussed granting robots a special status as “electronic personalities,” with much the same rights as corporations, which have a “legal personality.”