Several industry leaders and academics have issued a statement warning of the need to rigorously mitigate the risks associated with Artificial Intelligence (AI).
The statement in brief
The open letter was posted by the Center for AI Safety (CAIS), a San Francisco-based non-profit.
The statement, in brief, reads as follows: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
What are the fears regarding AI?
Much of the debate over AI centres on hypothetical scenarios in which its capabilities improve so quickly that its safe functioning can no longer be guaranteed.
Many experts point to the swift improvement of systems such as large language models as evidence that far greater gains in machine intelligence lie ahead.
They believe that once the technology reaches a certain level of sophistication, its actions may become impossible to control.
Can AI do all that?
Others doubt such scenarios, pointing to the technology’s inability to handle even relatively mundane tasks such as driving a car.
Despite years of effort and billions of dollars in investment, fully self-driving cars are still far from reality.
However, both sceptics and proponents agree that AI already poses a number of threats: from enabling mass surveillance, to powering faulty “predictive policing” algorithms, to easing the creation of misinformation and disinformation. On these fronts, AI clearly needs checks and balances.
Powerful figures among signees
The open letter was signed by figures including Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman, as well as Geoffrey Hinton and Yoshua Bengio — two of the three AI researchers who won the 2018 Turing Award (sometimes referred to as the “Nobel Prize of computing”) for their work on AI.
A growing number of lawmakers, advocacy groups and tech insiders have not stayed passive amid the noise surrounding AI.
Threats and risks to be addressed
They have raised alarms about the potential for new AI-powered chatbots to spread misinformation and displace jobs.
“As stated in the first sentence of the signatory page, there are many ‘important and urgent risks from AI,’ not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks, and weaponization. These are all important risks that need to be addressed,” CAIS director Dan Hendrycks said.
He further said, “We didn’t want to push for a very large menu of 30 potential interventions. When that happens, it dilutes the message.”
The statement follows an open letter signed earlier this year that called for a six-month “pause” in AI development; that letter, however, produced no consensus on a remedy.