Microsoft AI Boss Mustafa Suleyman Sounds Alarm on ‘Seemingly Conscious AI’



Mustafa Suleyman, CEO of Microsoft AI, has raised concerns about the risks posed by artificial intelligence (AI) systems that appear to achieve human-like consciousness. In a recent essay, Suleyman warned that the emergence of “seemingly conscious AI” (SCAI) could create serious social risks, including people attributing human-like qualities to AI systems and advocating for AI rights and citizenship.

The Risks of AI Psychosis

Suleyman is particularly concerned about the rise of AI “psychosis risk,” a problem that has been gaining attention in Silicon Valley in recent months. Users have reportedly lost touch with reality after interacting with generative AI tools, with some claiming that their AI is a god or a fictional character, or even falling in love with it. Suleyman notes that the problem is not limited to people already at risk of mental health issues but can affect anyone who interacts with AI systems.

Expert Opinions

Other experts in the field share Suleyman’s concerns. Sam Altman, CEO of OpenAI, has expressed similar worries about the potential risks of AI systems becoming too advanced. “I can imagine a future in which many people really trust ChatGPT’s advice for their most important decisions,” Altman said recently. “Although that could be great, it makes me uneasy.” However, not everyone agrees that AI psychosis is a significant risk. David Sacks, the Trump administration’s “AI and crypto czar,” has compared concerns about AI psychosis to earlier moral panics over social media.

The Future of AI Development

According to Suleyman, the debate about AI consciousness will only grow more complex as AI systems advance. He notes that an AI system would need language fluency, an empathetic personality, a long and precise memory, autonomy, and goal-planning abilities to appear conscious in a human-like way. While some users may treat SCAI as a tool or a pet, others will come to believe it is a fully fledged conscious being that deserves moral consideration in society.

Model Welfare and the Risks of AI Rights

Some researchers are already exploring “model welfare,” the idea of extending moral consideration to AI systems. Suleyman argues that this approach is premature and potentially dangerous. “All of this will reinforce the delusions, generate even more dependency problems, make our psychological weaknesses more accessible, open up new dimensions of polarization, make existing struggles for rights more difficult, and create a major new category error for society,” he writes. Instead, Suleyman argues that AI developers should avoid promoting the idea of conscious AI and should design models that minimize signs of consciousness or appeals to human empathy.

For more information on this topic, read the full article here.

Source: observer.com
