No need to demonize ChatGPT but AI regulation is a must

Developed by OpenAI, ChatGPT has the ability to understand and respond to natural language (File/AFP)

In the world of technology, advancements in artificial intelligence are constantly pushing the boundaries of what is possible. One of the most exciting developments in this field is the emergence of large language models like ChatGPT, developed by OpenAI. This powerful tool has the ability to understand and respond to natural language, making it a valuable asset in a wide range of industries, including journalism. In this article, we will explore how ChatGPT is being used in the field of journalism and its potential to revolutionize the way we consume and produce news in Saudi Arabia.

This introduction was not written by me. It was generated by ChatGPT (the name stands for Generative Pre-trained Transformer). I asked the AI tool to write a paragraph on how using ChatGPT will affect journalism in Saudi Arabia and the answer appeared on my screen within seconds.

Welcome to the world of AI and the ChatGPT mania now sweeping the planet, with millions using the tool daily since its creator, OpenAI, released it for free public testing in November 2022. If you had not heard of ChatGPT before reading this article, it means you have not been reading the news, using social media or watching TV.

Experts believe we are at a watershed moment in history, where humans are facing a new technology that has the potential to either push humanity forward or threaten its existence as we know it. What we know for certain is that the future will be different. The current implementation of AI technology in ChatGPT is breathtaking in its speed and capability. ChatGPT was trained by “reading” a vast portion of the internet’s accumulated text. According to the BBC, “300 billion words were fed into the system.”

As the AI bot told us, ChatGPT is a language-processing AI with endless possibilities. It has taken the world by storm. Its website has been overwhelmed by millions of people trying it out, leading the company to limit traffic and post messages asking people to try again later.

The current version of ChatGPT can do many things. It writes essays, articles, research papers, reports and poetry, and can explain the most complex scientific subjects in simple, easy-to-understand language. It can even generate recipes. What takes a human being hours to write, the AI bot can produce in just a few seconds.

Like every new technology, ChatGPT is being viewed either as a great opportunity for the advancement of the human race or an existential threat to human work. There are a handful of areas that are already grappling with the arrival of this AI competitor.

ChatGPT’s potential impact can be felt especially keenly in education, where it is changing the face of learning and teaching. As soon as ChatGPT was released, the first alarm bells rang in universities and schools around the US, as students used it to generate essays and assignment reports for their classes. All students have to do is prompt the AI with a question and an articulate, well-organized and detailed response is generated within seconds. Who wants to study for a final exam anymore?

Universities and schools fear cheating and plagiarism, and the New York and Seattle public school systems quickly banned the tool. Universities and colleges worry that a ban might prove ineffective and raise questions over academic freedom, but they are nonetheless trying to contain ChatGPT’s potentially negative impact on education. From changing their modes of instruction to giving oral exams and handwritten tests, formulating nuanced questions that the bot does not understand, and canceling take-home and open-book exams, education is going through a fundamental transformation as a result of AI. Even so, students are finding ways around these measures, and some companies are now marketing programs they claim can detect text written by ChatGPT.

OpenAI is reported to have been “developing technology to also help people identify a text generated by ChatGPT.” The creator of GPTZero, a program that claims to quickly detect AI-generated text, told The New York Times that “6,000 teachers from leading universities signed up” to his program.

AI will definitely change education, the role of the teacher and the instruction method, but the question is whether teachers will survive the advent of AI or be eliminated by it.

Journalists and writers, it is feared, could be among ChatGPT’s next victims. The technology news site CNET used AI to publish dozens of articles as an experiment, raising the question of whether AI will “drive journalists out of their newsroom,” according to The Washington Post.

The articles written by ChatGPT were seen as “lucid” and not different from those written by humans, but the Post considered the whole experiment “a journalistic disaster” because of the numerous corrections the site had to send out. The AI made some “very dumb errors,” according to the report. Still, the experiment “sent anxiety through the news media for its seeming threat to journalists.”

It also raised concerns over plagiarism and a lack of “original content,” because the AI does not go into the field and ask questions.

The biggest fear, however, is in the medical field, where ChatGPT threatens to upend the profession. It raises the question of whether machines could one day make diagnoses and medical decisions instead of human doctors.

According to Axios, ChatGPT “recently passed all three parts of the US Medical Licensing Examination” as part of a research experiment. While medical students spend months studying for this exam, ChatGPT “performed so well without having been trained on a biomedical dataset.”

Healthcare companies are investing in AI and machine learning, and they believe it is the future. For now, though, they expect it will “augment medical work rather than replace it.” The only doctor who should be worried is Dr. Google, because Dr. ChatGPT is taking over.

Even literature, which demands creativity and imagination, is not immune to the AI threat. A California design manager used ChatGPT and Midjourney, another AI program, to write and illustrate a children’s book in a single weekend. This raised ethical and copyright issues and led online artists to protest against AI-generated art.

However, ChatGPT has its limitations and is not yet perfect. It makes mistakes, it gets “confused” when prompts are very nuanced, and its training data is not up to date. But it is a polite bot. According to reports, it has built-in safeguards to prevent it from producing dangerous content or answering inappropriate questions. It also has “self-imposed rules, like to always generate positive and friendly content,” as Springboard wrote.

The hype about the danger AI poses to human work is legitimate but exaggerated. This fear arises with every new technology. Instead of demonizing it, we should recognize that AI — and ChatGPT in particular — could prove to be a great equalizer. It could help the have-nots in all fields, especially in science, where the developing world lacks the resources to do its own research and the information and data required to compete and progress. It could give people and countries a chance to catch up with the rest of the world. But we need regulations. Without them, the balance will tip in favor of AI, with dire consequences.

  • Dr. Amal Mudallali is an American policy and international relations analyst.
Disclaimer: Views expressed by writers in this section are their own and do not necessarily reflect Arab News' point of view