
Top AI CEOs warn against the risk of extinction

May 30, 2023, 5:23 PM UTC
(Photo by Google DeepMind on Unsplash)

A coalition of prominent AI researchers, engineers, and CEOs has issued a stark warning about the existential risk they believe AI poses to humanity. Published by the Center for AI Safety, a San Francisco-based non-profit, the 22-word statement reads in full: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Notable signatories include Demis Hassabis, CEO of Google DeepMind; Sam Altman, CEO of OpenAI; and Geoffrey Hinton and Yoshua Bengio, two of the three recipients of the 2018 Turing Award. The third recipient, Yann LeCun, currently chief AI scientist at Meta, the parent company of Facebook, has not signed.

The statement is the latest high-profile intervention in the contentious debate over AI safety. Earlier this year, many of the same people now backing the 22-word warning signed an open letter calling for a six-month "pause" on developing AI systems more powerful than GPT-4. That letter drew criticism on several fronts: some experts believed it exaggerated the risks posed by AI, while others agreed with the concerns but objected to the proposed remedy.

Dan Hendrycks, the executive director of the Center for AI Safety, explained to The New York Times that the brevity of the current statement, devoid of specific proposals to mitigate the AI threat, aimed to sidestep such disagreements. Hendrycks asserted, “We didn’t want to propose an extensive menu of 30 potential interventions because that would dilute the message.”

This concise yet powerful statement signifies a “coming-out” moment for industry figures alarmed by the risks associated with AI. Hendrycks, in an interview with The Times, dispelled the common misconception that only a few individuals in the AI community express concerns about these issues. He revealed, “In reality, many people privately share apprehensions about these matters.”

While the broad contours of the AI safety debate are familiar, the specifics often descend into interminable, hard-to-resolve arguments over hypothetical scenarios in which AI systems rapidly advance beyond human control. Proponents of stringent AI safety measures point to the rapid progress of systems like large language models as evidence that further intelligence gains are coming, and argue that once AI systems reach a certain level of sophistication, it may become impossible to control their actions.

Related: OpenAI CEO clarifies stand on EU AI regulations

Skeptics challenge these predictions, however, pointing to the inability of AI systems to handle even relatively mundane tasks such as driving a car autonomously. Despite years of dedicated research and substantial investment, fully self-driving cars remain a distant prospect. If the technology struggles with such a comparatively narrow challenge, skeptics argue, there is little reason to expect it to match the full breadth of human capability in the foreseeable future.

Despite differing perspectives on the future implications of AI, advocates and skeptics alike recognize that AI systems currently pose several tangible threats. These threats range from enabling mass surveillance and fueling flawed “predictive policing” algorithms to facilitating the creation and dissemination of misinformation and disinformation.

