Elements of a talk given during API Days Paris, January 31, 2018
I’ll start by reading an excerpt from a paper given by four “evil robots” sent back from the year 2082 to speak at the international CHI conference in 2013:
1. “CHI and the Future Robot Enslavement of Humankind: A Retrospective”
A paper by Ben Kirman, Conor Linehan, Shawn Lawson, Dan O’Hara (link)
“As robots from the future, we are compelled to present this important historical document which discusses how the systematic investigation of interactive technology facilitated and hastened the enslavement of mankind by robots during the 21st Century.
“The CHI community has taken on the specific burden of responsibility to design technology such that it is usable, accessible, effective, fun and ubiquitous. On the face of things, the results of these efforts seem to make people’s lives easier, more enjoyable, better informed, healthier and more sustainable. However, the reality is that this could not be further from the truth.
“The truth is this: that we, as robots from the future, have watched over the eager, yet misguided, work of the CHI community and occasionally steered it towards its true goal: the complete enslavement of humankind by its evil robot masters.
“Although there has been a history of concern about this eventuality, the field tirelessly focussed on the improvement of technology to make it more usable, accessible and fun, while simultaneously more ubiquitous, hidden and capable of understanding and controlling the behaviour of humans. Indeed, significant effort was expended in developing systems that either directly or surreptitiously increased the workload of humans, freeing up machines to engage in more fulfilling pursuits. The majority of 21st century HCI research was for the purposes of increasing the reliance of humans on, and affection for, machines.
“Our closing statement is to congratulate the CHI community for creating the inevitability of human enslavement by machines.”
–> My question: are you, API developers, providing the back-office for this? Am I an evil robot from the future? Is Simone Cicero, who was on stage just before, another robot?
2. Some real-life experiences that herald this glorious future
- Think of yourself having to call the customer support number of your mobile phone operator. What does it feel like talking to a machine that’s more interested in saving its own time than in solving your problem? To machines pretending to be people? To humans who are programmed and controlled so that they act like mindless representatives of the corporate machine?
- Think of workers in Amazon’s warehouses whose every move is determined by programs, and measured, so that they become mere extensions of the machine.
- Think of food delivery platform cyclists waiting for the platform to hand them badly paid work…
–> … And realize that you, as API developers, are central to creating parts of this world.
3. What world are we talking about, since there are also so many cool and useful things that you can do with APIs?
In fact, the evil robots from the 2082 future are not just any robots, they are evil corporate robots. They have an agenda, and it’s one that corporations had followed with IT for decades before that fateful year, 2013 (or 2018), when it became safe to reveal the truth to humans, because it was already too late.
And this agenda is:
To hollow organizations out of people,
at least as autonomous, sense-making, sense-seeking and prone-to-arguing entities.
Again, it’s an old agenda. It’s not just about automation, although automation is clearly part of it. It’s also about the outsourcing of peripheral, then of core, activities, including innovation and customer relationships; about international consultants with no knowledge of people’s work defining, then coding, processes and KPIs that make most people’s work ever more meaningless. Etc.
In order to achieve that, you need to formalize every process and every interface between processes (this is where APIs come in). This has a lot of interesting properties, as Simone Cicero described just before this session. However, in most real-life corporations, the major property is that organizations, or subsets of the organization, no longer interact with people (customers, users, suppliers, colleagues…), but with abstractions: profiles, features, requests, rules, datasets…
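The point can be made concrete with a deliberately crude sketch (all names and fields here are hypothetical, invented for illustration): once an interface is formalized, the system on the other side of the API no longer “sees” a person, only the fields the schema admits.

```python
# Hypothetical sketch: behind a formalized interface, a customer is reduced
# to the abstraction the schema defines. All names here are invented.
from dataclasses import dataclass


@dataclass
class Profile:
    """What the organization actually interacts with -- not a person."""
    customer_id: str
    segment: str
    churn_risk: float  # a score, standing in for a human relationship


def handle_request(profile: Profile) -> str:
    # The rule engine reasons about the abstraction, never the human behind it.
    if profile.churn_risk > 0.8:
        return "offer_retention_discount"
    return "standard_flow"


print(handle_request(Profile("c-42", "consumer", 0.9)))
```

Nothing in this interface can negotiate, burn out, or ask for anything the schema did not anticipate, which is precisely the property described above.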
And what goes away with this change is empathy. We’re a deeply empathic species, as recent research has shown. But we empathize with other animals, especially mammals, not with abstractions. We expect abstractions to do what we want them to do, not to negotiate, or burn out, or ask for a safer work environment.
And that goes for customers and employees, too. When empathy goes away, there is no more brand loyalty, no sense of belonging, no collaboration except in exchange for something tangible. Every relationship becomes a transaction.
4. Accelerando, Charles Stross
Now, evil-corporate-robotwise, the cool thing with APIs is that this does not just happen within corporate silos. It becomes systemic.
Let me read how Charles Stross described this systemic takeover by “sentient viral corporations” (another name for “evil corporate robots of the future”) in Accelerando – already in 1999!
“Economics 2.0 is a system that is ‘more efficient than any human-designed resource allocation schema’. It ‘replaces the single-indirection layer of conventional money, and the multiple-indirection mappings of options trades, with some kind of insanely baroque object-relational framework based on the parametrized desires and subjective experiential values of the players’. Human intelligence is incapable of participating in Economics 2.0 without dehumanizing cognitive surgery.”
In such a world, humans are inconvenient, if not mildly embarrassing.
5. Another agenda?
So it’s not the technology per se, it’s the agenda. Like Simone Cicero, I believe we need to revisit the questions to which we’re providing answers. If, of course, we don’t like what I just described, then we need to revisit the way APIs contribute to a corporate agenda that essentially uses software to substitute for people and to turn every interaction into a transaction between abstract entities.
What could be part of a different agenda? Just a few ideas:
- Imagine APIs that give insights on how systems work and enable meaningful discussions on their goals, resources and assumptions.
–> This is what current discussions on the “transparency” and “loyalty” of algorithms are about (and why most experts are so eager to tell us that it’s no longer possible to understand what a system does).
- Imagine APIs whose role is to really (I’m quoting Simone’s “APIs for disobedience” paper) “give open access to data and insights, providing opportunities for learning and improvement to everyone” – to which I’d add that “everyone” covers not just outside organizations, but also individuals.
–> This is what the international (and emerging) MyData movement is about: empowering people with their own data.
- Imagine APIs that endeavor to share economic value, not just capture and centralize it.
–> I’m thinking of Open Value Networks pioneered by Canadian startup Sensorica, or perhaps of enabling new “commons” and their management.
- Imagine APIs that, instead of empowering a chatbot, would empower customers to solve their own problems together (and document the solution so as to teach the corporation).
- … And perhaps, emotional, irrational, incomplete APIs that can only create meaningful value with the input of people.
–> I’m reminded of an old (2007) paper by Martin Dodge and Rob Kitchin on lifelogging (“‘Outlines of a World Coming into Existence’: Pervasive Computing and the Ethics of Forgetting”), where they argued for deliberately fallible and even slightly forgetful systems to archive people’s lives:
“While building fallibility into the system seemingly undermines life-logging, it seems to us the only way to ensure that humans can forget, can rework their past, can achieve a progressive politics based upon debate and negotiation, and can ensure that totalitarian disciplining does not occur. (…) Without fallibility life-logs might never happen because people will oppose their development. In that sense, forgetting may be an essential ingredient to pervasive computing.”
As an evil-corporate-robot of the future, I’m happy no one took them seriously. But maybe, if you dislike my agenda, you should.