Tim Berners-Lee invented the web — now he has an idea to rein it back

Tim Berners-Lee, inventor of the World Wide Web: “An engine of inequity and division, swayed by powerful forces who use it for their own agendas.” (AFP)
Updated 09 October 2018

  • This month the creator admitted that it had become “an engine of inequity and division”
  • He’s designed a new online platform called Solid to help internet users take back control of their personal data

LONDON: In 1984, five years before the web began to throw its tentacles around the world, who could have guessed that a low-budget science-fiction film that made a Hollywood star out of an Austrian bodybuilder was a prophetic cautionary tale?

In 1989, five years after Arnold Schwarzenegger’s “The Terminator” hit cinema screens, a 34-year-old British computer scientist called Tim Berners-Lee invented the World Wide Web, an information-sharing system that allowed documents and other digital resources to be linked across the Internet, the network of computers on which it runs. It was the invention of the web, the horse to the Internet’s cart, that made the network’s explosive global expansion not only possible, but also inevitable.

Berners-Lee was probably too busy inventing to watch “The Terminator.” Had he done so, perhaps the plot, involving an artificially intelligent defense computer network called Skynet that decides human beings are a threat to its future and triggers a nuclear apocalypse, might have given him pause for thought.

This month Berners-Lee, now 63, admitted that his invention, which he had intended as an egalitarian device for uniting and improving humanity, had instead become a divisive monster, “an engine of inequity and division, swayed by powerful forces who use it for their own agendas.”

The naivety is almost touching. You build a free road, then you have no control over the sort of people who drive on it, the way they drive, the vehicles they use or the destinations they choose.

Artificial intelligence, a rapidly developing field irresistible to everyone from industrialists keen to delete human jobs to tech-obsessed early adopters who would rather tell their house lights to come on than go to the bother of flicking a switch, is a perfect partner in crime for a network that is already deeply embedded in every aspect of modern human life.

The recent revelations about the activities of Russian agents, prowling the world and hacking into supposedly secure operations such as the Organisation for the Prohibition of Chemical Weapons (OPCW), a US nuclear power company and Britain’s Porton Down defense laboratory, serve as a reminder of just how vulnerable the Internet really is.

As for the Internet of Things, well, would you really feel happy in a hospital where your medication is administered not by a nurse, but by a device responding to instructions from a remote server, an unnerving scenario that is already unfolding in some hospitals around the world? 

If that sounds like a far-fetched threat, consider that America’s Department of Homeland Security is currently investigating revelations that the latest generation of remotely programmable pacemakers is vulnerable to hackers, who could assassinate a target simply by instructing the device to induce a cardiac arrest.

The wealthier parts of the Middle East, a burgeoning market for connected and smart devices, are currently more vulnerable than more mature markets. A study last year by IBM looked at 410 companies in 13 countries and found that data breaches in Saudi Arabia and the UAE carried the highest per-capita cost, adding up to an annual bill of $4.94 million, up 6.9 percent from the year before. Criminal attacks were the most common cause of such breaches, with perpetrators chiefly exploiting the security headaches posed by the widespread use of mobile devices in the region.

Businesses have been keen to jump on the technology bandwagon, but less adept at making sure the wheels don’t fall off. Just how unprepared many are is highlighted by the fact that organizations in Saudi Arabia and the UAE took an average of 245 days to identify a breach, and then a further 80 days to contain it. The two countries are among those that spend the most on cleaning up after data breaches.

Now Berners-Lee has resurfaced, taking time off from his current day job as a professor at the Massachusetts Institute of Technology to launch Solid, a new online platform that, he says, will allow Internet users to take back control of all that personal data stored on private and government servers around the world.

It’s true that each one of us is merely a pixel in the giant and exponentially expanding snapshot of human activity in the 21st century that is Big Data. Almost four billion people now access the Internet, Google handles 40,000 searches every second (half of them from mobile phones), Facebook has more than two billion users posting 500,000 comments every minute, and in those same 60 seconds more than 150 million emails are sent — about one third of them spam.

In the wake of scandals such as Cambridge Analytica’s abuse of Facebook users’ data, Solid certainly sounds like a good idea.

Push past the startup hyperbole — “I will be guiding the next stage of the web in a very direct way ... its mission is to provide commercial energy and an ecosystem to help protect the integrity and quality of the new web” — and the Berners-Lee solution boils down to this: Solid will enable a user’s personal data to be held not on remote servers by the likes of Google and Facebook, but on ... remote servers operated by Inrupt, the company Berners-Lee has formed.

The contents of, and access to, this so-called “data pod” will be controlled by the user — via yet another app, naturally — who will be able to decide which other apps and services can have access to which bits of it. 
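Stripped of the hyperbole, the “data pod” model described above, personal data held in one place with the owner granting and revoking per-app access, can be sketched in a few lines. This is a purely illustrative toy, not the real Solid platform (which is built on linked-data standards such as WebID and access-control lists); every class and method name here is invented for the sketch.

```python
class DataPod:
    """Toy model of a personal data pod with owner-controlled access."""

    def __init__(self, owner):
        self.owner = owner
        self._data = {}    # key -> stored value, e.g. "contacts" -> [...]
        self._grants = {}  # key -> set of app names allowed to read it

    def store(self, key, value):
        """Owner writes a piece of data into the pod."""
        self._data[key] = value
        self._grants.setdefault(key, set())

    def grant(self, app, key):
        """Owner allows a named app to read one piece of data."""
        self._grants[key].add(app)

    def revoke(self, app, key):
        """Owner withdraws that access at any time."""
        self._grants[key].discard(app)

    def read(self, app, key):
        """An app's read succeeds only if the owner has granted access."""
        if app not in self._grants.get(key, set()):
            raise PermissionError(f"{app} may not read '{key}'")
        return self._data[key]


pod = DataPod("alice")
pod.store("contacts", ["bob", "carol"])
pod.grant("calendar-app", "contacts")
print(pod.read("calendar-app", "contacts"))  # prints ['bob', 'carol']
pod.revoke("calendar-app", "contacts")
# pod.read("calendar-app", "contacts") would now raise PermissionError
```

The essential inversion is visible even in the toy: the data never moves, and it is the app, not the platform, that must ask permission.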

But is this a solution to the problem of an Internet that is out of control, awash with private data that we, wittingly or unwittingly, have released into the wild for the benefit of commercial and other, more sinister, players? Or is it merely another portal through which “they” will gain access to the digital “us,” and another opportunity to forget yet another password?

The early days of the Internet on an Apple Macintosh. (Shutterstock)

Solid faces an uphill battle. Berners-Lee is going head-to-head with companies such as Google and Facebook. He is, he says, aware that what he is proposing would, if successful, upend their business models overnight. “We are not,” he says with bravado, “asking their permission.”

It seems unlikely that these multibillion-dollar businesses will simply throw up their hands and walk away. And no one so far has been able to control, curtail or otherwise restrict the Internet. As fast as any government moves to block or filter access, sharper and more devious minds are bypassing barriers. As security companies devise “failsafe” protection for online bank accounts, so those same devious minds are exploiting the essentially anarchic nature of the medium and leaping one step ahead of them.

Instead of inviting yet another player and their glossy app into our digital lives, perhaps we should start heeding the warnings of organizations such as the Cambridge-based Centre for the Study of Existential Risk (CSER), set up in 2012 and dedicated to “the study and mitigation of risks that could lead to human extinction or civilizational collapse.”

To the CSER, the Internet, with its proven ability to incite political uprisings, disseminate fake news and propaganda, facilitate cyberattacks and “weaponize” the rise of artificial intelligence, is a threat on a par with catastrophic climate change and a global pandemic triggered by runaway biotechnological developments.

There is, of course, one simple way to prevent the theft and misuse of your personal information, whether by criminals, commercial operators or state players intent on disrupting entire societies: keep it to yourself. Stay off the Internet and trust no one to manage your data for you.

Of course, in an era where access to the Internet has been elevated by the UN to the status of a human right, and many believe they will cease to exist if they don’t have a presence on social media, getting people to turn their backs on the likes of Facebook, Instagram and Twitter may well require a reboot of the modern mindset that is no longer possible.

In 2016, the British Council celebrated its 80th anniversary by inviting a panel of scientists, technologists, academics, artists, writers, broadcasters and world leaders to choose their most significant moments of the past 80 years. At the top of the final list, placed in order of importance by the votes of 10,000 people, was Berners-Lee’s World Wide Web, ranked ahead of the discovery of penicillin, the UN’s Universal Declaration of Human Rights and the invention of the atomic bomb.

The web, pronounced the British Council, was “the fastest-growing communications medium of all time” and the Internet it facilitated had “changed the shape of modern life forever,” allowing us to “connect with each other instantly, all over the world.” Back in 1989, that probably seemed like a good thing. 

* * *

Jonathan Gornall is a British journalist, formerly with The Times, who has lived and worked in the Middle East and is now based in the UK. Copyright: Syndication Bureau



Netflix Review: ‘Leila’ offers a frightening fictional glimpse into India under draconian rule

Netflix’s original six-episode series, “Leila,” is an unflinching look at a fictional futuristic India run under a draconian political, social and cultural structure. (Supplied)
Updated 19 June 2019

CHENNAI: Netflix’s original six-episode series, “Leila,” is an unflinching look at a fictional futuristic India run under a draconian political, social and cultural structure.

Adapted from Prayaag Akbar’s novel of the same title, and directed by Deepa Mehta (known for bold films such as “Fire,” “Earth” and “Water”) alongside Shanker Raman and Pawan Kumar, “Leila” is set in 2047, a century after India gained independence from the British Empire, and is a daring take on what the country could become if authoritarianism and radical forces had their way.

India, in “Leila,” is called Aryavarta, a dictatorial state ruled by Joshi (Sanjay Suri) with the help of a ruthless police force, where the painful segregation of people on the basis of religion, caste and economic status is routine. Communities are separated by formidably tall walls to ensure purity of race.

Children of mixed parentage are whisked away from their parents, and women who marry outside their religion are sent to places resembling concentration camps, where they are reformed and re-educated.

One of them is Shalini (Huma Qureshi), whose marriage to Rizwan (Rahul Khanna) outside her community is branded a crime. Her little daughter, Leila, is taken away, and her husband murdered.

The series follows the distraught mother as she goes looking for the girl. Hurt and humiliated by a draconian administration that relies on thugs and a highly intrusive surveillance system to maintain order, Shalini befriends a state-appointed minder, Bhanu (Siddharth).

Penned by Urmi Juvekar, Suhani Kawar and Patrick Graham, the series departs slightly from the book and runs like a thriller, with chases, brawls over water (“Bandit Queen” director Shekhar Kapur once wanted to make a movie about water wars, but could not) and torturous living conditions in filthy slums.

Qureshi shows flashes of brilliance as a deeply troubled woman who pines for her child, but her character’s quest is repeatedly blocked by an unfeeling regime with a zero-tolerance approach to dissent.

Order is enforced through inhuman forms of punishment, and at one point Shalini has to roll over plates of half-eaten food.

With Netflix outside the purview of India’s sometimes rigid censorship rules, Mehta and her fellow directors have been able to present, most graphically, a scenario that is well within the realm of possibility.