Has Global Artificial Intelligence Development Become a Threat?


The processing power behind AI is expected to more than quadruple annually, which, some warn, could eventually produce a self-replicating intelligence whose effects are still unclear. A who's who of technology leaders has demanded that artificial intelligence (AI) labs stop developing the most advanced AI systems for at least six months, citing "profound risks to society and humanity."


In an open letter with more than 3,100 signatories, including Apple co-founder Steve Wozniak, the tech leaders singled out San Francisco-based OpenAI's recently announced GPT-4 algorithm and asked the company to suspend further development until governance measures are in place. Signatories from around the world include technology executives, CEOs, CFOs, doctoral students, psychologists, physicians, software developers and engineers, professors, and public school teachers.


Last month, after the natural language processing tool suffered a data breach involving user conversations and payment information, Italy became the first Western country to block ChatGPT over privacy concerns. The well-known GPT-based chatbot was developed by OpenAI, which has received billions of dollars in funding from Microsoft.


The Italian data protection regulator said it is also investigating whether OpenAI's chatbot has already violated the European Union's General Data Protection Regulation, which governs how personal data is handled both inside and outside the EU. The BBC reported that OpenAI has complied with the order.


Many in the technology sector expect GPT, the Generative Pre-trained Transformer, to eventually evolve into GPT-5, which some believe could be an artificial general intelligence (AGI). AGI is a form of AI that can reason independently; at that point, the algorithm would keep learning at an exponential rate.


According to Epoch, a research group that tries to forecast the development of transformative AI, a trend emerged around 2016 of training AI models two to three orders of magnitude larger than earlier systems, and that pattern has persisted. Jaime Sevilla, Epoch's director, said there are currently no AI systems larger than GPT-4 in terms of training compute, but that may change. The compute used to train large-scale machine learning models has been more than doubling every year.
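To make that growth rate concrete, here is a minimal back-of-the-envelope sketch, assuming a steady 2x-per-year increase (the lower bound of the "more than doubling" figure above; actual rates and figures vary):

```python
import math

annual_growth = 2.0  # assumed lower bound on yearly growth in training compute

# How long until models are two to three orders of magnitude (100x-1,000x) larger?
for factor in (100, 1_000):
    years = math.log(factor, annual_growth)
    print(f"{factor:>5}x larger after roughly {years:.1f} years at {annual_growth}x/year")
# prints ~6.6 years for 100x and ~10 years for 1,000x
```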


In his blog, Microsoft co-founder Bill Gates hailed AGI's ability to learn any task or subject as "the great ambition of the computing industry." A vigorous debate about how to create artificial general intelligence, and whether it can even be created at all, is now under way in the computing industry, according to Gates.


OpenAI and Google-backed DeepMind are among only a handful of organizations working toward artificial general intelligence, and despite their "vast amounts of financial and technological resources," they still have a long way to go before reaching AGI, according to Muddu Sudhakar, CEO of an enterprise generative AI company.


Sudhakar said there are many things people do naturally that AI systems cannot, such as common-sense reasoning, knowing what a fact is, and understanding abstract concepts such as justice, politics, and philosophy. Reaching AGI will require numerous advances and breakthroughs, he said, but if it is achieved, such a system could largely replace humans.


Because an AGI would undoubtedly be disruptive, there would need to be many guardrails to stop it from seizing total power, according to Sudhakar. But for now, that prospect most likely lies in the distant future and falls more into the realm of science fiction.


However, some people disagree.


AI technology and chatbot assistants have made inroads in nearly every industry, and that trend will continue. Because the technology can boost efficiency and take over menial tasks, knowledge workers and others can focus on more important work.


For instance, large language models (LLMs), the algorithms that power chatbots, can sift through millions of alerts, online chats, and emails, as well as detect phishing web pages and potentially malicious executables. With just a few simple prompts, LLM-powered chatbots can suggest programming code, write articles, and create marketing plans.
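As a rough illustration of that kind of triage, the sketch below asks an LLM to label a suspicious email. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and example email are illustrative assumptions, not a prescribed workflow.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

email_text = """Subject: Urgent - verify your payroll account
Click http://payro11-update.example.com within 24 hours or lose access."""

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Label the email as PHISHING or LEGITIMATE and give a one-line reason."},
        {"role": "user", "content": email_text},
    ],
)

# The model's label and rationale, e.g. "PHISHING - lookalike domain and artificial urgency"
print(response.choices[0].message.content)
```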


LLM-powered chatbots are natural language processors that, given a user's prompt, essentially predict the next words. So if a user asked a chatbot to write a poem about a person lounging on a beach in Nantucket, the AI would simply string together the most probable words based on its training. Yet LLMs have also made well-publicized mistakes and can produce "hallucinations," in which the models confidently present fabricated or nonsensical results.
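A minimal sketch of that "predict the next word" behavior, using the small open GPT-2 model from the Hugging Face transformers library; the model choice and greedy decoding are assumptions for illustration, not how ChatGPT is actually served:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A person lounging on a beach in Nantucket"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # a score for every vocabulary token at each position

next_token_id = logits[0, -1].argmax()   # greedy choice: the single most likely next token
print(tokenizer.decode(int(next_token_id)))  # the model's best guess at the next word
```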


If LLM-based AI with billions of adjustable parameters can go awry, how much greater will the risk be when AI no longer needs humans to instruct it and can think for itself? Far greater, according to Avivah Litan, a vice president and senior analyst at Gartner Research. Litan believes unchecked AI research labs are developing the technology at a pace that could lead to uncontrollable AGI. "They rushed ahead without providing the right tools for people to keep track of what is happening," she said. "It's moving much more quickly than anyone anticipated, in my opinion."


The worry right now is that organizations are adopting AI technology without the tools users need to judge whether it is producing accurate or false information. And while much of the discussion is about how innovative the good guys are, the bad guys are just as capable of innovation, according to Litan.


As an illustration, Microsoft released Security Copilot, which is built on OpenAI's GPT-4 large language model. The AI chatbot is a tool cybersecurity specialists can use to detect and respond to attacks more effectively and to better understand the broader threat picture. The issue, Litan said, is that "you as a user have to go in and identify any faults it makes. That must change. They ought to have some sort of rating system that says this result has a 5% chance of error and is therefore likely 95% accurate, while that one has a 10% chance of error. They aren't giving you any insight into the performance, so you can't determine whether you can trust it or not."


Another significant worry is that OpenAI could introduce an AGI-capable version of GPT-4 in the not-too-distant future; by then, it might be too late to rein in the technology. One potential remedy, Litan said, is to release two models for every generative AI tool: one for producing answers and a second for checking the first's accuracy. She suggested that approach could be a useful way to determine whether a model is producing reliable content.
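One illustrative reading of that two-model idea is sketched below: a generator drafts an answer, and a separate verifier pass estimates how trustworthy the draft is. The model names, the 0-100 scoring prompt, and both helper functions are assumptions for the sketch, not a published OpenAI or Gartner design.

```python
from openai import OpenAI

client = OpenAI()

def generate_answer(question: str) -> str:
    """First model: draft an answer to the user's question."""
    draft = client.chat.completions.create(
        model="gpt-4",  # illustrative generator model
        messages=[{"role": "user", "content": question}],
    )
    return draft.choices[0].message.content

def verify_answer(question: str, answer: str) -> str:
    """Second model: rate the draft's reliability and flag claims for human review."""
    review = client.chat.completions.create(
        model="gpt-4",  # illustrative verifier model; could be a different model entirely
        messages=[{
            "role": "user",
            "content": (f"Question: {question}\nAnswer: {answer}\n"
                        "Rate the answer's factual reliability from 0 to 100 "
                        "and list any claims a human should double-check."),
        }],
    )
    return review.choices[0].message.content

question = "When did Italy's data protection regulator restrict ChatGPT?"
answer = generate_answer(question)
print(answer)
print(verify_answer(question, answer))  # surfaces an error estimate alongside the output
```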


In this specific instance, the call is for a temporary moratorium on developing more complex, unpredictable models that compete with human intelligence, not a blanket ban on AI development. Given the staggering pace at which powerful new AI models are emerging, it is clear that technology industry leaders and others will need to work together on safety controls and regulation.
