In the age of man versus machine, regulation is slow to arrive
Last week, the text message celebrated its 30th anniversary, while the old fax machine has been discontinued and consigned to the shelves of history. A quarter of a century ago, a computer defeated the world chess champion, opening the door wide for machine learning and artificial intelligence.
Every day, we hear about a new development or application of AI or specialized algorithmic tools, which have already accelerated the pace of change in fields that touch every aspect of human life, including scientific research, security surveillance (often through imperfect facial recognition tools) and the art of war (as seen in the current Ukraine conflict). AI could one day reduce or dramatically alter the way a labor force is used, or cause unexpected harm, especially to the marginalized and vulnerable in society.
It is no secret that AI has been developed and used by corporations and governments to solve large problems and increase efficiency — and they have been doing so, according to many experts, with little accountability or control. In recent times, algorithms and machine learning have been encroaching on, if not taking over, every sector and sphere of human activity under the broad heading of scientific advancement and the broad aims of cutting costs and streamlining production.
The monitoring of student protests at educational institutions, the deployment of intrusive surveillance systems to track refugees’ movements on the Turkish-Greek border, and efforts by gig workers in India to unite and take back control from the algorithm are just snapshots of the worldwide concern over this issue. As a result, entities such as the Pulitzer Center are supporting efforts to empower a global community of journalists, media outlets and educators to deepen engagement with critical yet underreported issues, bridge divides, spur change and document shortfalls.
More work needs to be done to help researchers, journalists and others to examine governments and corporations’ use of predictive and surveillance technologies to guide decisions in policing, medicine, hiring, social welfare, criminal justice and more.
Overall, I am told that the machine cannot, as of now, develop itself to act alone, but it can do what we teach it. So, one hopes that those teaching it are keeping in mind some ethical values and are preventing the transfer to the algorithm of the twisted perceptions of reality that some people hold. They must prevent harm while pursuing profit, protect the vulnerable, and decrease rather than increase the levels of discrimination and harassment in society.
Regulation still lags behind the billions of dollars in funding deployed by giant tech companies, rendering the state and organs of the law, and even watchdogs, mere rubber stamps for tools and applications that have shaken up the landscape of doing business. Technology has already reorganized the means of production in society and often defies existing parameters — hence the outcry that the tech giants remain several steps ahead of any effort to regulate them, tax them or install safeguards to limit any forthcoming or lasting impacts.
Even if advances in AI do not produce what is known as artificial general intelligence — software capable of human-level performance on any task — the technology is slowly altering humanity’s concept of reality. It is enough to notice how today’s young people consume the news without questioning the motives behind aggregation tools to realize the scale of the potential harm.
No doubt our world is progressing toward great achievements, but this leap forward that science is making should prompt moral, ideological and philosophical reflections.
The story of Deep Blue defeating the great chess grandmaster Garry Kasparov in 1997 demonstrated that the power of an algorithm is not limited to what is contained in its lines of code — and the more refined that code is, the more powerful an adversary a machine can be to man.
Understanding our own flaws and weaknesses, as well as those of the machine and the ones programming it, is key to remaining in control of our world and the tenets that have been in place for millennia. That is where work should be focused in the future: to study, debate and understand the consequences of the ever-widening use of algorithm-led tools in every aspect of human life.
Society has been witnessing the penetration of AI into every sphere, and it is incumbent on us all — especially those involved in regulation, oversight and government, as well as journalists and civil society — to make sure it is free of harm, even though the age of AI and giant tech dominance remains an unregulated, or at best thinly regulated, era.
- Mohamed Chebaro is a British-Lebanese journalist, media consultant and trainer with more than 25 years of experience covering war, terrorism, defense, current affairs and diplomacy.