UAE prevents 34 cyberattacks against government, private sector in January

Updated 19 February 2018

DUBAI: The UAE prevented 34 attempts by cyber criminals to attack government and private sector entities last month, the Telecommunications Regulatory Authority (TRA) said on Monday.
This number was significantly lower than the 136 incidents reported during the same month last year, TRA added.
The attempts to break into the cybersecurity platforms of the UAE government and private companies originated from outside the country, the agency noted. Almost half were fraud attempts, eight were data breaches, three aimed to deface or block websites, and the rest were carried out for other purposes.
Cybersecurity is one of the major concerns for 2018 among executives in the UAE and the wider region, according to the World Economic Forum’s Global Risks Report, following a series of attacks last year that grew in both ferocity and frequency.
Widely reported attacks last year included WannaCry in May, ExPetr in June and BadRabbit in October. Last month, the TRA said the malware Zyklon accounted for the most reported attacks in the UAE.
The malware, which has been in use since 2016, is designed to launch distributed denial of service attacks, log keystrokes, steal passwords and mine cryptocurrency by exploiting several vulnerabilities in the Microsoft Office software suite.


Google chief trusts AI makers to regulate the technology

Updated 13 December 2018

  • Tech companies building AI should factor in ethics early in the process to make certain artificial intelligence with “agency of its own” doesn’t hurt people, Pichai said
  • Google vowed not to design or deploy AI for use in weapons, surveillance outside of international norms, or in technology aimed at violating human rights

SAN FRANCISCO: Google chief Sundar Pichai said fears about artificial intelligence are valid but that the tech industry is up to the challenge of regulating itself, in an interview published on Wednesday.
Tech companies building AI should factor in ethics early in the process to make certain artificial intelligence with “agency of its own” doesn’t hurt people, Pichai said in an interview with the Washington Post.
“I think tech has to realize it just can’t build it, and then fix it,” Pichai said. “I think that doesn’t work.”
The California-based Internet giant is a leader in the development of AI, competing in the smart software race with titans such as Amazon, Apple, Microsoft, IBM and Facebook.
Pichai said worries about harmful uses of AI are “very legitimate” but that the industry should be trusted to regulate its use.
“Regulating a technology in its early days is hard, but I do think companies should self-regulate,” he said.
“This is why we’ve tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”
Google in June published a set of internal AI principles, the first being that AI should be socially beneficial.
“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in a memo posted with the principles.
“As a leader in AI, we feel a deep responsibility to get this right.”
Google vowed not to design or deploy AI for use in weapons, surveillance outside of international norms, or in technology aimed at violating human rights.
The company noted that it would continue to work with the military or governments in areas such as cybersecurity, training, recruitment, health care, and search-and-rescue.
AI is already used to recognize people in photos, filter unwanted content from online platforms, and enable cars to drive themselves.
The increasing capabilities of AI have triggered debate about whether computers that could think for themselves would help cure the world’s ills or turn on humanity, as depicted in works of science fiction.