Apple says it will fix software problems blamed for making iPhone 15 models too hot to handle

The iPhone 15 phones are shown during an announcement of new products on the Apple campus in Cupertino, California, on Sept. 12, 2023. (AP)
Updated 01 October 2023
  • Says it is working on an update to the iOS 17 system that powers the iPhone 15 lineup to prevent the devices from becoming uncomfortably hot
  • Dismisses speculation that the overheating problem might be tied to a shift from its Lightning charging cable to the more widely used USB-C port

Apple is blaming a software bug and other issues tied to popular apps such as Instagram and Uber for causing its recently released iPhone 15 models to heat up and spark complaints about becoming too hot to handle.

The Cupertino, California, company said Saturday that it is working on an update to the iOS 17 system that powers the iPhone 15 lineup to prevent the devices from becoming uncomfortably hot and is working with apps that are running in ways “causing them to overload the system.”
Instagram, owned by Meta Platforms, modified its social media app earlier this week to prevent it from heating up the device on the latest iPhone operating system.
Uber and other apps such as the video game Asphalt 9 are still in the process of rolling out their updates, Apple said. It didn’t specify a timeline for when its own software fix would be issued but said no safety issues should prevent iPhone 15 owners from using their devices while awaiting the update.
“We have identified a few conditions which can cause iPhone to run warmer than expected,” Apple said in a short statement provided to The Associated Press after media reports detailed overheating complaints that are peppering online message boards.
The Wall Street Journal amplified the worries in a story citing the overheating problem in its own testing of the new iPhones, which went on sale a week ago.
It’s not unusual for new iPhones to get uncomfortably warm during the first few days of use or when they are being restored with backup information stored in the cloud — issues that Apple already flags for users. The devices also can get hot when using apps such as video games and augmented reality technology that require a lot of processing power, but the heating issues with the iPhone 15 models have gone beyond those typical situations.
In its acknowledgement, Apple stressed that the trouble isn’t related to the sleek titanium casing that houses the high-end iPhone 15 Pro and iPhone 15 Pro Max instead of the stainless steel used on older smartphones.
Apple also dismissed speculation that the overheating problem in the new models might be tied to a shift from its proprietary Lightning charging cable to the more widely used USB-C port that allowed it to comply with a mandate issued by European regulators.
Although Apple expressed confidence that the overheating issue can be quickly fixed with the upcoming software updates, the problem still could dampen sales of its marquee product at a time when the company has faced three consecutive quarters of year-over-year declines in overall sales.
The downturn has affected iPhone sales, which fell by a combined 4 percent in the nine months covered by Apple’s past three fiscal quarters compared with a year earlier.
Apple is trying to pump up its sales in part by raising the starting price for its top-of-the-line iPhone 15 Pro Max to $1,200, an increase of $100, or 9 percent, from last year’s comparable model.
Investor worries about Apple’s uncharacteristic sales funk already have wiped out more than $300 billion in shareholder wealth since the company’s market value closed at $3 trillion for the first time in late June.


Sudanese rebel fighters post war crime videos on social media

Updated 11 September 2024
  • Videos show Rapid Support Forces members glorifying destruction, torturing captives
  • Footage could provide evidence for future accountability, says expert

LONDON: Rebel fighters from the Sudanese Rapid Support Forces have posted videos on social media that document their involvement in war crimes, according to a recent report by UK-based newspaper The Guardian.

The footage, which has been verified by the independent non-profit organization Centre for Information Resilience, shows fighters destroying properties, burning homes and torturing prisoners.

The films could serve as key evidence in potential war crime prosecutions by international courts.

Alexa Koenig, co-developer of the Berkeley Protocol, which sets standards for the use of social media in war crime investigations, told The Guardian: “It’s someone condemning themselves. It’s not the same as a guilty plea but in some ways, it is a big piece of the puzzle that war crimes investigators have to put together.”

The RSF has been locked in conflict with the Sudanese military since April 2023, bringing the country to the brink of collapse.

Some estimates suggest there have been up to 150,000 civilian casualties, with 12 million people displaced. This would make Sudan the country with the highest internal displacement rate in the world, according to the UN.

In Darfur’s El Geneina, more than 10,000 people — mostly Masalit — were killed in 2023 during intense fighting. Mass graves, allegedly dug by RSF fighters, were discovered by a UN investigation.

One video posted on X by a pro-RSF account showed a fighter in front of the Masalit sultan’s house declaring: “There are no more Masalit … Arabs only.”

Other footage features fighters walking through streets lined with bodies, which they call “roadblocks,” and scenes of captives being abused and mocked. Some even took selfies with their victims.

The videos offer rare glimpses into the atrocities happening in Sudan, a region largely inaccessible to journalists and NGOs.

In August, Human Rights Watch accused both sides in Sudan’s ongoing conflict of committing war crimes, including summary executions and torture, after analyzing similar social media content.


Australia considering banning children from using social media

Updated 11 September 2024
  • Australia is the latest country to take action against these platforms
  • Experts voiced concerns ban could fuel underground online activity

LONDON: The Australian government announced Tuesday it is considering banning children from using social media, in a move aimed at protecting young people from harmful online content.

The legislation, expected to pass by the end of the year, does not yet specify an exact age limit, though Prime Minister Anthony Albanese suggested it could be between 14 and 16 years.

“I want to see kids off their devices and onto the footy fields and the swimming pools and the tennis courts,” Albanese told the Australian Broadcasting Corp.

“We want them to have real experiences with real people because we know that social media is causing social harm,” he added, calling the impact a “scourge.”

Several countries in the Asia-Pacific region, including Malaysia, Singapore, and Pakistan, have recently taken action against social media platforms, citing concerns over addictive behavior, bullying, gambling, and cybercrime.

Introducing this legislation has been a key priority for the current Australian government. Albanese highlighted the need for a reliable age verification system before a final decision is made.

The proposal has sparked debate, with digital rights advocates warning that such restrictions might push younger users toward more dangerous, hidden online activity.

Experts voiced concerns during a Parliamentary hearing that the ban could inadvertently harm children by encouraging them to conceal their internet usage.

Meta, the parent company of Facebook and Instagram, which currently enforces a self-imposed minimum age of 13, said it aims to empower young people to benefit from its platforms while providing parents with the necessary tools to support them, rather than “just cutting off access.”


Rapid advancement in AI requires comprehensive reevaluation, careful use, say panelists at GAIN Summit

Panelists at GAIN Summit discuss the transformative impact of AI on education. (Supplied)
Updated 10 September 2024
  • KAUST’s president speaks of ‘amazing young talents’ 

RIYADH: The rapid advancement in artificial intelligence requires a comprehensive reevaluation of traditional educational practices and methodologies and careful use of the technology, said panelists at the Global AI Summit, also known as GAIN, which opened in Riyadh on Tuesday.

During the session “Paper Overdue: Rethinking Schooling for Gen AI,” the panelists delved into the transformative impact of AI on education — from automated essay generation to personalized learning algorithms — and encouraged a rethink of the essence of teaching and learning, speaking of the necessity of an education system that seamlessly integrated with AI advancement.

Edward Byrne, president of King Abdullah University of Science and Technology, said the next decade would be interesting with advanced AI enterprises.

He added: “We now have a program to individualize assessment and, as a result, we have amazing young talents. AI will revolutionize the education system.”

Byrne, however, advised proceeding with caution, advocating the need for a “carefully designed AI system” while stressing the “careful use” of AI for “assessment.”

Alain Le Couedic, senior partner at venture firm Artificial Intelligence Quartermaster, echoed the sentiment, saying: “AI should be used carefully in learning and assessment. It’s good when fairly used to gain knowledge and skills.”

Whether at school or university, students were embracing AI, said David Yarowsky, professor of computer science at Johns Hopkins University.

He added: “So, careful use is important as it’s important to enhance skills and not just use AI to leave traditional methods and be less productive. It (AI) should ensure comprehensive evaluation and fair assessment.”

Manal Abdullah Alohali, dean of the College of Computer and Information Science at Princess Nourah bint Abdulrahman University, underlined that AI was a necessity and not a luxury. 

She said the university had recently introduced programs to leverage AI and was planning to launch a “massive AI program next year.”

She explained that the university encouraged its students to “use AI in an ethical way” and “critically examine themselves” while doing so.

In another session, titled “Elevating Spiritual Intelligence and Personal Well-being,” Deepak Chopra, founder of the Chopra Foundation and Chopra Global, explored how AI could revolutionize well-being and open new horizons for personal development.

He said AI had the potential to help create a more peaceful, just, sustainable, healthy, and joyful world as it could provide teachings from different schools of thought and stimulate ethical and moral values.

While AI could not duplicate human intelligence, it could vastly enhance personal and spiritual growth and intelligence through technologies such as augmented reality, virtual reality, and the metaverse, he added.

The GAIN Summit, which is organized by the Saudi Data and AI Authority, is taking place until Sept. 12 at the King Abdulaziz International Conference Center, under the patronage of Crown Prince Mohammed bin Salman.

The summit is focusing on one of today’s most pressing global issues — AI technology — and aims to find solutions that maximize the potential of these transformative technologies for the benefit of humanity.


Older generations more likely to fall for AI-generated fake news, Global AI Summit hears

Updated 10 September 2024
  • Semafor co-founder Ben Smith says he is ‘much more worried about Gen X and older people’ falling for misinformation than younger generations

RIYADH: Media experts are concerned that older generations are more susceptible to AI-generated deep fakes and misinformation than younger people, the audience at the Global AI Summit in Riyadh heard on Tuesday.

“I am so much more worried about Gen X (those born between 1965 and 1980) and older people,” Semafor co-founder and editor-in-chief Ben Smith said during a panel titled “AI and the Future of Media: Threats and Opportunities.”

He added: “I think that young people, for better and for worse, really have learned to be skeptical, and to immediately be skeptical, of anything they’re presented with — of images, of videos, of claims — and to try to figure out where they’re getting it.”

Smith was joined during the discussion, moderated by Arab News Editor-in-Chief Faisal Abbas, by the vice president and editor-in-chief of CNN Arabic, Caroline Faraj, and Anthony Nakache, the managing director of Google MENA.

Semafor co-founder and editor-in-chief Ben Smith.

They said that AI, as a tool, is too important not to be properly regulated. In particular they highlighted its potential for verification of facts and content creation in the media industry, but said educating people about its uses is crucial.

“We have always been looking at how we can build AI in a very safe and responsible way,” said Nakache, who added that Google is working with governments and agencies to figure out the best way to go about this.

The integration of AI into journalism requires full transparency, the panelists agreed. Faraj said the technology offers a multifunctional tool that can be used for several purposes, including data verification, transcription and translation. But to ensure a report contains the full and balanced truth, a journalist will still always be needed to confirm the facts using their professional judgment.

The panelists also agreed that AI would not take important jobs from humans in the industry, as it is designed to complete repetitive manual tasks, freeing up more of a journalist’s time to interact with people and their environment.

“Are you really going to use AI to go to a war zone and to the front line to cover stories? Of course not,” said Faraj.

Vice president and editor-in-chief of CNN Arabic, Caroline Faraj.

Smith, who has written a book on news sites and viral content, warned about the unethical ways in which some media outlets knowingly use AI-generated content because they “get addicted” to the traffic such content can generate.

All of the panelists said that educating people is the key to finding the best way forward regarding the role of AI in the media. Nakache said Google has so far trained 20,000 journalists in the region to better equip them with knowledge of how to use digital tools, and funds organizations in the region making innovative use of technology.

“It is a collective effort and we are taking our responsibility,” he added.

Anthony Nakache, the managing director of Google MENA.

The panelists also highlighted some of the methods that can be used to combat confusion and prevent misinformation related to the use of AI, including the use of digital watermarks and programs that can analyze content and inform users if it was AI-generated.

Asked how traditional media organizations can best teach their audiences how to navigate the flood of deep fakes and misinformation, while still delivering the kind of content they want, Faraj said: “You listen to them. We listen to our audience and we hear exactly what they wanted to do and how we can enable them.

“We enable them and equip them with the knowledge. Sometimes we offer training, sometimes we offer listening; but listening is a must before taking any action.”


Governance and regulation of AI is crucial, experts say at Saudi-hosted summit

Updated 11 September 2024
  • Panelists discuss UN initiatives and recommendations to support ethical governance of AI

RIYADH: Governance is crucial for artificial intelligence, said South Africa’s minister of science, technology, and innovation, Blade Nzimande, on Tuesday at the third Global AI Summit in Riyadh.

In a panel titled “Global Approach to Advance Ethical Governance of AI,” Nzimande announced South Africa’s collaboration with international partners to ensure full implementation of UNESCO’s recommendations on the governance of AI.

UNESCO released its first-ever global standard on AI ethics, titled “Recommendation on the Ethics of AI” in 2021, and earlier this year, launched the Global AI Ethics and Governance Observatory, which is a platform for knowledge, expert insights, and good practices on the ethics and governance of AI.

Nzimande said that UNESCO’s recommendations, if implemented, would help “address the racial and gender biases, which are often embedded in AI systems; safeguard against AI applications, which violate human rights; and ensure that AI development does not contribute to climate degradation.”

He added: “We need to ensure that the governance of AI is truly inclusive, and not the self-claimed prerogative of a select few. UNESCO offers us this inclusive, globally representative platform, where the voices of all matter, and South Africa commits our resources to support the recommendation’s implementation, in Africa and elsewhere.”

Other panelists included Laurence Ndong, minister of information and communication technologies for Gabon; Mohammed Ali Al-Qaed, chief executive of the Information and eGovernment Authority for the Kingdom of Bahrain; Makara Khov, secretary of state at the Cambodian Ministry of Post and Telecommunications; Ali Al-Shidhani, undersecretary for communications and information technology for the Sultanate of Oman; German State Secretary for the Federal Ministry of Digital and Transport Stefan Schnorr; Miroslav Trajanovic, state secretary at the Serbian Ministry of Science, Technological Development and Innovation; and Aissatou Jeanne Ndiaye, Senegal’s director of information and communication technology.

During the session, each representative gave a run-down of their country’s commitment to ethical AI governance.

The rapid growth of AI has made its regulation a critical focus, a topic that informed another panel, titled “Efforts in Shaping Global AI Governance from the Roadmap for Digital Cooperation to the Global Digital Compact.”

Panelists included Nighat Dad, executive director of the Digital Rights Foundation; Amandeep Singh Gill, the secretary-general’s envoy for technology at the UN; Lattifa Al-Abdulkarim, member of the Shura Council and the UN High-Level Advisory Body on AI; Nazneen Rajani, founder and CEO of Collinear AI; and Philip Thigo, Kenya’s special envoy on technology.

The panelists analyzed the “Interim Report: Governing AI for Humanity” by the UN secretary-general’s AI advisory body, focusing on the role of the body in shaping global AI policy.

Rajani highlighted the issue of limited data availability for some countries or entities and the importance of data governance, in line with UNESCO’s recommendation that member states develop data governance strategies.

“One way to bridge that gap is to think of data governance in a way where we can have a data trust; a marketplace of sharing anonymized, privacy preserving data,” she said.

The GAIN Summit, organized by the Saudi Data and AI Authority, is taking place from Sept. 10-12 at the King Abdulaziz International Conference Center, under the patronage of Crown Prince Mohammed bin Salman.