In 2016, at the request of Melbourne’s Deakin University, the Canadian writer and journalist Cory Doctorow wrote an interesting story about how the real-world development of self-driving cars could go really, really wrong.
As Doctorow himself puts it: “The story, Car Wars, takes the form of a series of vignettes that illustrate the problem with designing cars to control their drivers, interspersed with survey questions to spur discussion of the wider issues of governments and manufacturers being able to control the operation of devices we own and depend on.” (In practice, the survey questions don’t really help spur discussion, as Deakin professor Gleb Beliakov provides his own unequivocal and somewhat laconic answer to all of them – you can, however, view the survey results here.)
In the story, the interaction between highly intelligent self-driving software, rules and exceptions forced into the car systems by all kinds of authorities, and a well-planned act of behavioral hacking forces most of the city’s cars into behaving like a herd of frightened buffaloes driven over the edge of a cliff. All but one cleverly (although illegally) software-hacked car. But of course, if you had the right to hack your car, and if everyone did it, the situation could get even worse. Or could it?