Does AI threaten the future of Google Search?

Last year, Google Search and other web-based Google properties, which span many countries and languages, accounted for $149 billion in revenues. (Shutterstock)
Updated 22 December 2022

  • Some experts believe emerging technology such as ChatGPT and Noor could challenge Google’s dominance
  • The latest AI bots certainly have the potential to revolutionize web searches but, for now at least, they have limitations

LONDON: Google Search is in peril, some people believe. The ubiquitous search engine, which has been the gateway to the internet for billions of people worldwide for the past two decades, faces “existential threats,” they say, that are forcing parent company Alphabet’s management to declare a “code red.”

“Google may be only a year or two away from total disruption,” Paul Buchheit, a Gmail developer, wrote in a message posted on Twitter this month. “(Artificial Intelligence) will eliminate the search engine result page, which is where they make most of their money.”

Buchheit continued by predicting that AI could transform and replace the internet-search industry in much the same way that Google effectively destroyed the formerly successful Yellow Pages model of printed business telephone directories, which had thrived for many decades.

AI and chatbot services such as ChatGPT are already beginning to revolutionize the way people carry out research online by providing users with an unprecedented level of convenience and speed.

Unlike traditional search engines, which rely on keyword-matching to provide results, AI chatbots use advanced algorithms and artificial intelligence to understand the deeper intent behind a user’s query.

As a result, ChatGPT is capable of responding to more complex requests, writing simple code, working through difficult problems, and chatting in a relatively human-like manner. Contrast this with Google, which only provides users with the links and tools they need to carry out detailed research themselves.

Because responses are generated in real time and more accurately reflect what is actually being asked, natural language processing services such as ChatGPT can deliver the information users require, through a conversational AI interface, in a fraction of the time it would take to search for it manually.
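
To make this distinction concrete, the short Python sketch below contrasts the two approaches. It is a minimal illustration only: the miniature corpus, the example.com links and the stubbed chatbot_answer function are hypothetical stand-ins, not how Google Search or ChatGPT are actually built. The point is simply that a keyword engine returns links whose text overlaps with the query, while a conversational model composes a direct answer.

    # Illustrative sketch only: the corpus, links and stubbed chatbot below are
    # hypothetical and do not reflect how Google or OpenAI build their systems.

    CORPUS = {
        "https://example.com/google-search": "Google Search ranks web pages that match keywords in a query",
        "https://example.com/chatgpt": "ChatGPT is a conversational AI model developed by OpenAI",
        "https://example.com/yellow-pages": "The Yellow Pages were printed directories of business phone numbers",
    }

    def keyword_search(query: str) -> list[str]:
        """Return links whose text shares at least one word with the query."""
        terms = set(query.lower().split())
        return [url for url, text in CORPUS.items() if terms & set(text.lower().split())]

    def chatbot_answer(query: str) -> str:
        """Stand-in for a large language model call: a real chatbot composes a
        direct answer to the question instead of returning a list of links."""
        return f"(model-generated answer to: {query!r})"

    if __name__ == "__main__":
        question = "who developed ChatGPT and why does it matter for Google"
        print("Keyword search returns links:", keyword_search(question))
        print("Chatbot returns an answer:", chatbot_answer(question))

The first function can only point the user toward pages to read; the second stands in for a model that replies to the underlying question, which is the difference described above.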

In other words, as many experts have been quick to point out, ChatGPT performs many similar tasks to Google — only better.

Google is one of several businesses, research institutions and experts that have contributed to the development of the technology underpinning ChatGPT, which stands for Chat Generative Pre-trained Transformer. It is a groundbreaking collaborative project spearheaded by a research lab called OpenAI, which is also behind DALL-E, an AI-powered system that generates images from natural language descriptions provided by a user.

Although Google’s own search engine already exploits the power of AI in an effort to enhance the service it provides and deliver more relevant results to users, some experts believe the tech giant might struggle to compete with the newer, smaller companies developing these AI chatbots, because of the many ways the technology could hurt its existing business model.

In April, the Technology Innovation Institute, a cutting-edge research hub in Abu Dhabi, unveiled a service similar to ChatGPT, called Noor. The biggest Arabic-language natural language processing model to date, it is intended to provide the Arab region with a competitive edge in the field, given that technologies such as chatbots, market intelligence, and machine translation traditionally have tended to significantly favor English- and Chinese-language markets.

Last year, Google Search and other web-based Google properties, which span many countries and languages, accounted for $149 billion in revenues. The disruptive power of services such as ChatGPT and Noor therefore could represent a significant blow to Google’s parent company Alphabet and its business model.

“The potential for something like OpenAI’s ChatGPT to eventually supplant a search engine like Google isn’t a new idea but this delivery of OpenAI’s underlying technology is the closest approximation yet to how that would actually work in a fully fleshed out system, and it should have Google scared,” TechCrunch US managing editor Darrell Etherington wrote this month.

However, it is still early days and, as Jacob Carpenter points out, “the idea of upstart AI firms supplanting Google feels premature,” given that Alphabet can call on its significant resources to help see off any potential competition.

ChatGPT, described as the most advanced AI chatbot on the market, is available in several regions and supports a variety of languages, including Arabic. However, despite the enormous advances it undoubtedly represents, limitations remain.

In its current form, ChatGPT is unable to access the internet or other external sources of information, which means it cannot draw on real-time information or provide geo-based recommendations.

Moreover, the training data for its model only extends to 2021, and the program often offers incorrect or biased answers, which means the service, at least for now, is not a reliable source of information.

Although the buzz generated by ChatGPT and Noor is likely to attract users and investors, which will help the technology to further develop, significant skepticism remains as to whether such AI chatbots will ever be able to do to Google Search what Google Search did to Yellow Pages.

For all the lofty claims from some experts about the potential of advanced language models, it is important to recognize that they do offer distinct advantages, enhanced abilities and a different user experience from existing Google services, one with the potential to revolutionize the way we search the web. It is equally important to be aware, however, that even the developers of ChatGPT have said the technology is “not a direct competitor to Google Search and is not likely to replace it.”


2018 Musk tweet unlawfully threatened workers’ union organizing efforts, court rules

Updated 01 April 2023

  • Also upheld was the board’s order that Tesla reinstate and provide back pay to an employee who was fired for union-organizing activity

NEW ORLEANS: A 2018 Twitter post by Tesla CEO Elon Musk unlawfully threatened Tesla employees with the loss of stock options if they decided to be represented by a union, a federal appeals court ruled Friday.
The ruling by a three-judge panel of the 5th US Circuit Court of Appeals upheld a March 2021 order by the National Labor Relations Board requiring that the tweet be deleted. The case arose from United Auto Workers’ organizing efforts at a Tesla facility in Fremont, California.
Also upheld was the board’s order that Tesla reinstate and provide back pay to an employee who was fired for union-organizing activity.
Musk tweeted on May 20, 2018: “Nothing stopping Tesla team at our car plant from voting union. Could do so tmrw if they wanted. But why pay union dues and give up stock options for nothing? Our safety record is 2X better than when plant was UAW & everybody already gets health care.”
The ruling said that “because stock options are part of Tesla’s employees’ compensation, and nothing in the tweet suggested that Tesla would be forced to end stock options or that the UAW would be the cause of giving up stock options, substantial evidence supports the NLRB’s conclusion that the tweet is an implied threat to end stock options as retaliation for unionization.”
The UAW, and Richard Ortiz, the worker whose reinstatement was ordered, praised the ruling. “I look forward to returning to work at Tesla and working with my co-workers to finish the job of forming a Union,” Ortiz said in a UAW email.
“This is a great victory for workers who have the courage to stand up and organize in a system that is currently stacked heavily in favor of employers like Tesla who have no qualms about violating the law,” said UAW Region 6 Director Mike Miller.
Tesla had not responded to emailed requests for comment Friday afternoon.


‘Only journalism can save journalism,’ FII Priority panel told

Faisal Abbas, editor-in-chief of Arab News, talks to Justin Smith, co-founder and CEO of global news platform Semafor at the FII
Updated 01 April 2023

  • News subscriptions grew by nearly 58 percent between 2019 and 2020
  • Social media platforms have grown up and are now being held responsible by governments and regulators, panel told

MIAMI: Trust in news has fallen in almost half of the 46 countries surveyed by the Reuters Institute. Other studies show that only 8 percent of people in the US trust what they read, see and hear.

These numbers do not surprise Faisal Abbas, the editor-in-chief of Arab News. Speaking at the FII Priority conference, he said: “We’re living in an era where we are bombarded by information left, right and center, so for people to distrust the information that they are receiving is not unusual.”

Abbas said there was a silver lining in this situation: an increase in subscriptions.

“People, for the first time since the expansion of the internet, are willing to actually pay money for professional, quality journalism.”

There was a median increase of nearly 58 percent in active subscribers between 2019 and 2020, according to data from analytics firm Piano.

Given that anyone now has the “ability to disseminate and receive information unfiltered instantly,” professional, quality journalism is more important than ever, Abbas said.

Faisal Abbas, editor-in-chief of Arab News, and Justin Smith, co-founder and CEO of global news platform Semafor, discuss falling trust in news media

Justin Smith, former CEO of Bloomberg Media and current CEO and co-founder of global news platform Semafor, who moderated the panel, said that Semafor believes the best way of “attacking trust is to rethink the actual format.”

Semafor’s articles are therefore broken down into sections featuring the news, analysis, different perspectives on the topic, and other articles on the topic.

In the Middle East, unlike America, there is no equivalent of the First Amendment to protect the free speech of the press and the public. However, and particularly in Saudi Arabia, “we’re living in a positive climate of reforms,” said Abbas.

While he acknowledged “we’re not there yet,” he added that since “the whole vision is focused on setting targets, KPIs, and transparency for government officials and bodies, it is unthinkable that we will not get there in the end.”

Arab News itself has seen 500 percent growth in traffic and audience and a large rise in newsletter subscriptions, so “we must be doing something right,” he added.

Moreover, when it comes to quality journalism, Saudi media continues to dominate the media scene, Abbas said.

This is, in part at least, due to the diligence and responsibility of media outlets to maintain editorial integrity, which “can’t be promised or pledged, it has to be proven with every story.

“Reputation arrives on foot and leaves on horseback, so it only takes one mistake. It’s not a responsibility we take lightly and neither does our management.”

Abbas also addressed how social media is affecting the truth and discussed the recent pressure social media companies are facing from governments and regulators.

He likened social media platforms to someone entering their teenage years — “whatever they did before was cute,” or “they were too young to know what they’re doing,” Abbas said.

But now, regulation is catching up and platforms are being held accountable, and treated as publishers who are liable for the content on their sites, he added.

Beyond news dissemination, Abbas drew attention to the problem of commercialization.

“We’re a victim of a situation whereby you are penalized to do professional journalism, and rewarded if you do lazy fake news.”

News media organizations incur multiple costs from commissioning a story, to legal reviews, to copyediting. They then end up sharing revenue with social platforms or Google, which is unfair, Abbas said.

Often, fake news stories go viral on social media platforms, garnering millions of clicks, “and that is just a classic model of how easy it is and how social media will reward you if you are publishing fake news.”

It is important to remember that “big tech companies weren’t founded by journalists or publishers; they were founded by engineers who didn’t quite understand the impact that fake news has,” Abbas said.

Ultimately, he concluded, “it’s up to us, only journalism can save journalism.”


Nobel-winning Russian editor: “I know Gershkovich, he’s no spy”

Updated 31 March 2023

  • Dmitry Muratov told Reuters the case against Gershkovich was part of a wider trend to make journalism a “dangerous profession” in Russia
  • More than 260 publications have been closed, blocked or de-registered since Russia went to war in Ukraine, he said

MOSCOW: A Nobel prize-winning Russian journalist said on Friday he did not believe that arrested American reporter Evan Gershkovich was a spy, and that he hoped diplomacy could bring about his quick release.
Dmitry Muratov told Reuters the case against Gershkovich — a Wall Street Journal reporter facing espionage charges that carry up to 20 years in jail — was part of a wider trend to make journalism a “dangerous profession” in Russia.
“I know Gershkovich. I’ve met him two or three times over the last year. I know the practice exists of using journalists as spies, intelligence officers and ‘illegals’ (undeclared spies) — this is not that kind of case,” Muratov said.
“He was no kind of so-called deep-cover operative — using being a journalist and his journalist’s accreditation as a cover for espionage ... Gershkovich was not a spy,” said Muratov, a co-winner of the Nobel Peace Prize in 2021 for his efforts to defend press freedom in Russia.
He was speaking outside a closed court hearing in Moscow on Friday in the case of Vladimir Kara-Murza, an opposition politician facing charges including state treason and spreading false information about the armed forces.
Muratov also cited the case of Ivan Safronov, a former journalist sentenced to 22 years in jail for treason last year.
“At every turn, we’re being charged with espionage and treason. It’s a trend — to show that journalism is a dangerous profession ... both for Russian and other journalists.”
Muratov was editor-in-chief of the independent newspaper Novaya Gazeta, which has seen several of its reporters killed in the last two decades, and had its registration revoked last year after Russia went to war in Ukraine. More than 260 publications have been closed, blocked or de-registered since then, he said.
“I don’t really understand how, given that trend and the lack of media competition, you can hold the elections that President Vladimir Putin announced for 2024,” he said.
“Does it mean they’ll go ahead without difficult topics, discussions, candidate programs? I’m starting not to understand how that can work.”
Muratov said he was aware of the “popular theory” that Gershkovich had been seized as a bargaining chip for Moscow to use in a prisoner exchange with the United States, though he did not say if he believed that himself.
He said he very much hoped that “through back-channel diplomacy,” Gershkovich would soon be freed.


Meta rolls out long-sought tools to separate ads from harmful content

Updated 31 March 2023

  • System offers advertisers three risk levels they can select for their ad placements

LONDON: Meta Platforms Inc. said on Thursday it is now rolling out a long-promised system for advertisers to determine where their ads are shown, responding to their demands to distance their marketing from controversial posts on Facebook and Instagram.
The system offers advertisers three risk levels they can select for their ad placements, with the most conservative option excluding placements above or below posts with sensitive content like weapons depictions, sexual innuendo and political debates.
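As a rough illustration of how a three-tier control of this kind could work in principle, here is a minimal Python sketch. The level names, content labels, sensitivity scores and filtering logic are assumptions invented for this example; they are not Meta’s actual categories, thresholds or API.

    # Hypothetical sketch of a three-level ad-adjacency control. The labels and
    # scores below are invented for illustration and are not Meta's real system.

    from enum import IntEnum

    class RiskTolerance(IntEnum):
        CONSERVATIVE = 0   # most restrictive: avoid anything sensitive
        MODERATE = 1       # allow mildly sensitive content
        EXPANDED = 2       # least restrictive: exclude only the most sensitive posts

    # Invented sensitivity scores for nearby content (higher = more sensitive).
    CONTENT_SENSITIVITY = {
        "cooking video": 0,
        "political debate": 1,
        "weapons depiction": 3,
    }

    def eligible_adjacencies(tolerance: RiskTolerance) -> list[str]:
        """Return the content types an ad may appear above or below at this setting."""
        return [label for label, score in CONTENT_SENSITIVITY.items() if score <= tolerance]

    if __name__ == "__main__":
        for level in RiskTolerance:
            print(level.name, "->", eligible_adjacencies(level))

Whatever the real mechanics, the trade-off has the same shape: a lower tolerance shrinks the pool of eligible placements, which is why restricting inventory could affect pricing in an auction-based system, as noted below.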
Meta also will provide a report via advertising measurement firm Zefr showing Facebook advertisers the precise content that appeared near their ads and how it was categorized.
Marketers have long advocated for greater control over where their ads appear online, complaining that big social media companies do too little to prevent ads from showing alongside hate speech, fake news and other offensive content.
The issue came to a head in July 2020, when thousands of brands joined a boycott of Facebook amid anti-racism protests in the United States.
Under a deal brokered several months later, the company, now called Meta, agreed to develop tools to “better manage advertising adjacency,” among other concessions.
Samantha Stetson, Meta’s vice president for Client Council and Industry Trade Relations, said she expected Meta to introduce more granular controls over time so advertisers could specify their preferences around different social issues.
Stetson also said early tests showed no significant change in performance or price for ads placed using more restrictive settings, adding that those involved in the tests were “pleasantly surprised.”
However, she cautioned that the pricing dynamic could change, given the auction-based nature of Meta’s ads system and the reduction in inventory associated with any restrictions.
The controls will be available initially in English- and Spanish-speaking markets, with plans to expand them to other regions — and to the company’s Reels, Stories and video ad formats — later this year.


Italy data protection agency opens ChatGPT probe on privacy concerns

Updated 31 March 2023

  • ChatGPT is accused of failing to verify user age

MILAN: Italy’s data protection agency said on Friday it had opened a probe into OpenAI’s ChatGPT chatbot over a suspected breach of data collection rules.
The agency also accused ChatGPT of failing to verify the age of its users, even though the service is meant to be reserved for people aged 13 and above.
It said it had provisionally restricted the chatbot’s use of Italian users’ personal data.