ai as stabilizer of status quo

AI will not be revolutionary unless it adopts openness as a design principle.

2023-03-13

The pace of advances in Artificial Intelligence over the past few months is overwhelming. Clearly, the world and society are once again being subjected to a process in which technological progress outstrips society's ability to assimilate it.

The ethical and technological dilemmas being discussed are a consequence of adopting the same proprietary philosophies that dominated the technological boom of the 90s. In the beginnings of Silicon Valley, we did not have the legal right to access or audit the code that our computers and personal devices were running. Now, with the boom of AI as a Service, we are repeating the same mistake: allowing a wave of private solutions to flood the market and people's lives without giving society the basic tools and abilities to understand them or adapt to this change.

What will be the end result? A society deeply impacted by black boxes managed by private corporations. Nothing revolutionary about that. Without the ability to audit, without the ability to choose, and without the ability to understand the ramifications and consequences, society will find itself caught once again in the familiar paradigm of "I accept the terms and conditions because they are too long to read and I have no other choice but to accept them."

In stark contrast to the current discourse that Artificial Intelligence represents a revolutionary technology, I venture to propose that it will be quite the opposite (in the strictest sense of the term revolutionary). Artificial Intelligence will not come to radically change the way we live, but rather to reinforce the prevailing status quo of the last century. The reason is that AI statistically averages what happened, removes the outliers, and guesstimates what comes next: AI is a stabiliser of the status quo. A decision system that automates "what worked in the past" without any critical analysis of what that "worked" actually represents is not revolutionary. And right now, we do not even have access to analyse it.
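To make the point concrete, here is a toy sketch (hypothetical data and names, not any vendor's actual system) of a decision system that simply "averages the past": it approves whatever was approved most often for similar applicants, and therefore can only repeat the historical pattern it was fed.

```python
# Toy illustration: a "decision system" that predicts by majority vote over
# past outcomes. It cannot produce anything the historical record does not
# already contain; it only repeats the dominant pattern at scale.
from collections import Counter

# Hypothetical historical decisions: (group, past_outcome)
history = [
    ("district_a", "approved"), ("district_a", "approved"),
    ("district_a", "approved"), ("district_a", "denied"),
    ("district_b", "denied"),   ("district_b", "denied"),
    ("district_b", "denied"),   ("district_b", "approved"),
]

def majority_decision(group: str) -> str:
    """Return the most frequent past outcome for the given group."""
    outcomes = [outcome for g, outcome in history if g == group]
    return Counter(outcomes).most_common(1)[0][0]

print(majority_decision("district_a"))  # approved, because it usually was
print(majority_decision("district_b"))  # denied, because it usually was
```

Nothing in that loop asks whether the past decisions were just or even correct; it only asks what they were.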

The possibility of building a revolutionary tool hinges on the capacity to openly discuss the processes that underpin it. However, the ecosystem of the major vendors of these solutions is anything but transparent. It is characterised by closed code, opaque processes, opacity about training data, and at times an inability to explain the rationale behind its outcomes. While the use of these technologies is still emergent, there must be a clear limit at the moment of deploying them into the public domain: an AI decision system with a direct impact on citizens should be implemented if, and only if, it is realistically possible to explain to the impacted people how it works.

AI will be revolutionary only if it embraces openness as a design principle.