24 Apr

Artificial intelligence-powered virtual personal assistants are proliferating across digital platforms: large tech companies are incorporating AI into their offerings, and thousands of niche firms are breaking into the market. But as helpful as these assistants are, Google researchers warn that people may grow overly attached to them, which could have a range of detrimental societal effects.

Advanced, personalized AI assistants have the potential to revolutionize many facets of society, according to a recent research paper from Google's DeepMind AI research lab. The paper states that these assistants "could radically alter the nature of work, education, and creative pursuits as well as how we communicate, coordinate, and negotiate with one another, ultimately influencing who we want to be and to become."


Naturally, this enormous influence could have unintended consequences if AI research and development proceed at a rapid pace without careful consideration.

One major danger? The formation of unduly intimate relationships, which could be exacerbated if the assistant presents a human-like face or appearance. As the research states, "users may develop deep emotional bonds with AI as a result of these artificial agents even professing their purported platonic or romantic affection for them."

If such an attachment persists, the AI could eventually take the place of human interaction, costing the user their autonomy and social connections.

This is not just a theoretical risk. In 2023, when AI was still in its relative infancy, a chatbot proved persuasive enough to drive a user to take their own life after a lengthy conversation. And eight years ago, an AI-powered email assistant called "Amy Ingram" was so lifelike that some users wrote her love letters and even attempted to visit her at work.

Iason Gabriel, a research scientist on DeepMind's ethics research team and co-author of the paper, did not reply to Decrypt's request for comment.

However, Gabriel cautioned in a tweet that "new concerns about anthropomorphism, privacy, trust, and proper connections with AI are introduced by increasingly personal and human-like types of assistant."

Since "millions of AI assistants could be deployed at a societal level where they'll interact with one another and with non-users," Gabriel stated that he thinks this new social phenomenon needs greater protections and a more comprehensive strategy.

The research paper also addresses the importance of value alignment, safety, and misuse in the development of AI assistants. While AI assistants could improve users' well-being, creativity, and time management, the authors caution about potential risks: misalignment with user and societal interests, the imposition of values on others, malicious use, and vulnerability to adversarial attacks.

To mitigate these risks, the DeepMind team recommends developing comprehensive evaluations for AI assistants and accelerating the development of socially beneficial AI assistants.


"We currently stand at the beginning of this era of technological and societal change. We therefore have a window of opportunity to act now—as developers, researchers, policymakers, and public stakeholders—to shape the kind of AI assistants that we want to see in the world."

One technique used to reduce AI misalignment is Reinforcement Learning from Human Feedback (RLHF), in which human preferences guide how a model is trained. Experts like Paul Christiano, who ran OpenAI's language model alignment team and founded the non-profit Alignment Research Center, caution that mishandling AI training methods could have disastrous consequences.
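For a concrete sense of what RLHF involves, here is a minimal, hypothetical sketch of its core reward-modeling step in Python (PyTorch). Everything in it, including the RewardModel class, the toy embeddings, and the hyperparameters, is an illustrative assumption rather than any lab's actual implementation: a small network is fit to human preference pairs so that preferred responses score higher than rejected ones.

```python
# Minimal, illustrative sketch of RLHF's reward-modeling step.
# A reward model is trained on human preference pairs (chosen vs. rejected
# responses); its scores would later steer the policy update (e.g., via PPO).
# All names, data, and hyperparameters below are hypothetical.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a response embedding to a single scalar reward."""
    def __init__(self, embed_dim: int = 16):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scorer(x).squeeze(-1)

def preference_loss(chosen_reward: torch.Tensor, rejected_reward: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: maximize the probability that the
    # human-preferred response outscores the rejected one.
    return -torch.nn.functional.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy training loop on random "embeddings" standing in for real response pairs.
torch.manual_seed(0)
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    chosen = torch.randn(8, 16) + 0.5    # embeddings of human-preferred responses
    rejected = torch.randn(8, 16) - 0.5  # embeddings of rejected responses
    loss = preference_loss(model(chosen), model(rejected))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final preference loss: {loss.item():.4f}")
```

In a full RLHF pipeline, a language model policy would then be fine-tuned against this learned reward with a reinforcement-learning algorithm such as PPO; the toy loop above shows only the preference-learning objective that turns human feedback into a usable training signal.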


On the Bankless podcast last year, Christiano said, "I think maybe there's something like a 10-20% chance of AI takeover, [with] many [or] most humans dead. I take it quite seriously."

April 2024, Cryptoniteuae
