In Techville, that imagined city where algorithms hum beneath the surface of daily life, a quiet question is beginning to echo louder than the machines themselves: Who is AI truly serving? It is a question no longer confined to philosophers or engineers. It is now the concern of governments, boardrooms, and societies navigating a world where artificial intelligence is not merely a tool, but an actor shaping decisions, opportunities, and, increasingly, human destiny.
Against this backdrop, the arrival of Matthieu Courtecuisse in Riyadh offers a timely opportunity to reflect on the ethical architecture required to ensure that innovation does not outpace responsibility. As the founder of Sia, a firm at the intersection of strategy and AI, Courtecuisse brings a perspective shaped by both technological ambition and global advisory experience.
In this conversation, we explore the urgent need for ethical governance, not as an abstract ideal, but as a concrete framework to safeguard human dignity in an age of intelligent machines.
Q: In simple terms, what does ethical governance of AI mean today — and where do you draw the line that must never be crossed to protect human dignity?
A: Ethical governance should focus less on legal frameworks and more on responsibility and accountability, with a clear human in the loop where needed. The key is ensuring that when something goes wrong, responsibility is clearly assigned. In practice, organizations must be able to identify and hold accountable those in charge. This responsibility-driven approach is more effective than heavy legal structures, especially in a fast-moving global race. However, human oversight and respect for core human values must always be maintained.
Q: As AI increasingly shapes decisions in government and business, what concrete safeguards should firms like Sia put in place to ensure accountability and transparency are not just promises, but realities?
A: Firms must ensure full traceability and thorough documentation of how AI systems are designed and implemented. Clear structures linking responsibility to accountability are essential, even where gaps remain. Operating across multiple countries requires formalized processes to align with diverse regulations. While rapid technological change creates challenges, organizations must remain aware of their limits and continuously adapt their governance frameworks.
Q: Should the world move toward a global ranking of governments based on their ethical use of AI, across both public services and private-sector oversight, and what would be the key indicators of such a benchmark?
A: A global ranking is difficult and likely unrealistic due to the multi-factor nature of AI use and varying levels of national development. Different domains — such as military, biology, or finance — require distinct considerations, and countries are not equally advanced. Industry-specific benchmarks may be more practical, potentially adapted at the national level. Key indicators could include how sensitive data is used (e.g., political views, health, social data), but local governance must retain flexibility. Global coordination may be better achieved through conventions rather than rankings. Ethics should be judged by how fast harm is corrected, not how many principles are published.
Q: The recent case involving Anthropic and its engagement with the US Department of War has reignited debate on AI in defense. Where should ethical red lines be drawn when AI intersects with military decision-making?
A: Ethical red lines are essential, especially regarding human oversight and the distinction between military use and civilian surveillance. The debate highlights tensions between national security demands and ethical constraints, particularly around protecting citizens’ data. It also reflects differing governance models and investor influences across AI companies. These discussions are necessary to balance competing interests and define acceptable uses of AI in defense.
Q: In your advisory work, where do you most often see ethical blind spots in AI adoption — and are these failures of technology, governance, or leadership?
A: Blind spots arise from a combination of technology, governance, and leadership, but leadership is often the key factor. AI adoption is less about process and more about mindset and ecosystem management. A major challenge is upskilling and avoiding generational divides in the workforce. Younger generations adapt quickly, while leadership teams may lag if they are not actively engaging with the technology. Closing this gap is critical for responsible and effective AI adoption.
Q: As global power becomes more multipolar, what role should regions like the GCC play in shaping an international ethical framework for AI that genuinely safeguards human dignity?
A: Regions like the GCC must actively participate in the development and value creation of AI technologies to influence ethical standards. Without this involvement, they risk being sidelined, as standards are often set by leading technology providers for their home markets. Strategic investments — such as in compute infrastructure — can strengthen their position and enable them to shape global frameworks while preserving local decision-making authority.
In Techville, the lights are still on, the systems still learning. But the deeper question remains unresolved. Technology, for all its promise, cannot define the human person. It can only reflect the values of those who design, deploy, and govern it.
The challenge before us is not simply to build smarter machines, but to ensure that, in doing so, we do not forget what it means to be human. And perhaps, in places like Techville, where ambition meets reflection, the foundations of that balance are already being laid.
- Rafael Hernandez de Santiago, viscount of Espes, is a Spanish national residing in Saudi Arabia and working at the Gulf Research Center.