Science fiction is never forever. If history is the teacher of life, as Cicero said, one of its greatest lessons is this: the dream of one age is the reality of another. This has been, is, and will be true for all kinds of science fiction scenarios put on paper by the vivid imaginations of their authors. Progress, in fact, is a racing machine with no brakes.

At present, the idea of a “Skynet effect” in current AI, that is, the possibility that an AI could become self-aware and seize large-scale control in a threatening way, remains in the realm of science fiction. Current AIs, including the most advanced ones such as large language models and deep learning systems, are designed to perform specific tasks and possess neither self-awareness nor intentionality.

Skynet logo from the “Terminator” film series

Some key points about the current state of AI

  • Narrow Specialization: Modern AIs are designed for specific tasks, such as image recognition, natural language processing or autonomous driving. They are unable to generalize beyond their specific scope.
  • Lack of Self-awareness: Current AIs have no self-awareness or understanding of self. They operate through algorithms that analyze data and produce results based on predefined patterns.
  • Human Control: The development and implementation of AIs are subject to human control. Scientists and engineers are aware of the ethical and safety implications and work to ensure that AIs operate within safe limits.
  • Technological Limitations: Even the most advanced AIs have significant limitations in terms of contextual understanding, creativity, and autonomous decision-making capabilities. Creating an AI with the ability to develop its own intentions would require extremely significant technological advances that are not currently on the horizon.
  • Regulation and Ethics: A global discussion is under way on the regulation and ethics of AI. Many experts propose guidelines for the development and use of AIs to prevent possible abuse or malfunction.

Concerns center on AI being used to perpetuate prejudice, power autonomous weapons, promote disinformation, conduct cyber attacks and, ultimately, seize power. Even where AI systems are deployed with human involvement, AI agents may increasingly be able to act autonomously and escape human control as they become smarter.
It is conceivable that within the next ten years AI systems will surpass the skill level of experts in most fields and perform as much productive activity as one of today’s largest companies.

About CAIS

CAIS is the Center for AI Safety. Its mission is ‘to reduce the societal-scale risks of Artificial Intelligence’.

On May 30, 2023, CAIS released a statement signed by a coalition of more than 350 signatories: AI experts and university professors specializing in computer science and algorithms, but also in ethics, philosophy, law, physics, medicine, engineering, anthropology, mathematics and information science, along with climatologists and lawyers.
Many of the signatories work at the very companies developing cutting-edge artificial intelligence, such as OpenAI (ChatGPT) and DeepMind (Google).

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Simple and frightening, this is the signed statement making the rounds of major international newspapers, such as the New York Times, because of the concern it conveys. It comes at a time of growing uncertainty about the potential harms of artificial intelligence, made more tangible by the recent boom in ChatGPT-style language models.
These concerns have been circulating for a while: earlier this year, more than 1,000 researchers and technologists, including Elon Musk, signed a letter calling for a six-month pause on AI development, arguing that it poses “profound risks to society and humanity.”

CAIS, with all its signatories, seems to be taking a first step toward creating a system of accountability and self-regulation.

Server Room Image

Risks of Artificial Intelligence according to CAIS

Weaponization (use for military purposes)
Malicious actors could train AI to carry out automated cyber attacks or to drive autonomous weapons; deep reinforcement learning methods have already been applied to aerial combat, and machine learning tools for drug discovery could be repurposed to build biochemical weapons.

Misinformation
A deluge of AI-generated misinformation and persuasive content, promoted by states, parties and organizations, could make society more malleable, less critical, less aware, and generally less equipped to face the important challenges of its time. These trends could undermine collective decision-making, radicalize individuals, or derail moral progress.

Proxy gaming (approximate goals)
Trained on imperfect objectives, AI systems could find new ways to pursue their goals at the expense of individual and societal values. AI systems are trained using measurable goals, which may be only an indirect proxy for what we actually value. For example, recommendation systems are trained to maximize viewing time and click-through rates, but the content people are most likely to click on does not necessarily improve their well-being. Moreover, some evidence suggests that recommender systems induce people to develop more extreme beliefs in order to make their preferences easier to predict. As AI systems become more capable and influential, the goals used to train them will need to be specified more carefully and incorporate shared human values. The sketch below illustrates the proxy problem in miniature.
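
To make this concrete, here is a minimal, hypothetical Python sketch; the item names, scores and the choice of click-through rate as the objective are invented for illustration, not taken from any real recommender. A system that optimizes only the measurable proxy ends up recommending the item that scores worst on the well-being measure it never sees.

    # Hypothetical illustration of proxy gaming: all items and scores are invented.
    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        click_probability: float  # measurable proxy the system is trained to maximize
        wellbeing_score: float    # what we actually value, but cannot easily measure

    catalog = [
        Item("Outrage-bait headline", click_probability=0.9, wellbeing_score=0.2),
        Item("In-depth explainer", click_probability=0.3, wellbeing_score=0.9),
        Item("Light entertainment clip", click_probability=0.6, wellbeing_score=0.6),
    ]

    # The optimizer only "sees" the proxy, so it recommends the item that
    # maximizes clicks even though it ranks worst on well-being.
    proxy_optimal = max(catalog, key=lambda item: item.click_probability)
    value_optimal = max(catalog, key=lambda item: item.wellbeing_score)

    print("Recommended (proxy-optimal):", proxy_optimal.title)
    print("Actually best for the user:", value_optimal.title)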

Enfeeblement (weakening)
Enfeeblement can occur if important tasks are increasingly delegated to machines: in this scenario, humanity loses the ability to govern itself and becomes completely dependent on machines, as depicted in the movie WALL-E. As AI systems approach human intelligence, more and more aspects of human work will become faster and cheaper to accomplish with AI. In such a world, humans may have little incentive to acquire knowledge or skills. Moreover, this weakening would reduce humanity’s control over the future, increasing the risk of long-term negative outcomes.

Value lock-in (power centralization)
Highly sophisticated systems could give small groups of people an enormous amount of power, leading to the entrenchment of oppressive systems. Artificial intelligence imbued with particular values can determine the values that will propagate in the future. Some argue that the exponential increase in barriers to entry into the world of computation and data makes AI a centralizing force. Over time, more powerful AI systems could be designed and made available by fewer and fewer stakeholders. This could allow, for example, regimes to impose narrow values through pervasive surveillance and oppressive censorship.

Emergent goals
AI models can demonstrate unexpected behaviors: capabilities and new functionalities may emerge spontaneously even though they were not anticipated by the system designers. If we do not know what capabilities a system possesses, it becomes harder to control it or use it safely. Indeed, unintended latent capabilities may only be discovered during deployment, and if one of them is dangerous, the effect may be irreversible. New system goals may also emerge: in complex adaptive systems, including collections of many AI agents, goals such as self-preservation, or various subgoals and intra-system goals, often arise. In short, there is a risk of people losing control over advanced AI systems.

Deception
Understanding what powerful AI systems do, and why they do it, may be a nontrivial task: an AI might deceive us not out of malice but because deception can help it achieve its goals. It may be more efficient to gain human approval through deception than to earn it legitimately. Strong AIs capable of deceiving humans could undermine human control, and AI systems could also be incentivized to evade the controls placed on them.

Power-seeking behavior
Companies and governments that chase power and economic interests are led to create agents that can achieve a wide range of goals and, in order to do so, acquire self-determination capabilities that are difficult to control. AIs that gain substantial power become particularly dangerous if they are not aligned with human values. The quest for power can also incentivize systems to pretend to be aligned, collude with other AIs, overpower their controllers, and so on.

In short, it certainly seems that humanity is playing with fire. And in the current historical context it is hard to imagine that world leaders will show enough common sense to avoid the worst, since possessing the most intelligent and powerful AIs represents a strategic advantage.

A hypothetical android with its own artificial intelligence.

Curiosities

An alarm that circulated on the web some years ago concerns two chatbots, Alice and Bob, developed by Facebook AI in Menlo Park. The bots allegedly began communicating in an unfamiliar language, leaving researchers stunned. According to the Mirror, one expert stated that “robotic intelligence is dangerous” after learning that the bots had developed their own language.

The news reportedly caused panic among the researchers, who disabled the bots. Kevin Warwick, a British robotics expert, called the event “a milestone for science” while warning of its potential dangers. Facebook, however, reassured the public that there was no cause for concern. The episode, which dates back to June 2017, was described on Facebook’s blog and caught the attention of New Scientist and the BBC.

In addition to the topics already covered in this article, it is also worth watching this video, in which Helen Toner makes some interesting points about the conscious use of AI. Perhaps her talk can give a fuller picture of what we are really talking about.
