TechHR
ex
L&D
UNPLUGGED
Sphere
About Us • Contact Us
People Matters ANZ
People Matters Logo
Login / Signup
People Matters Logo
Login / Signup
  • Current
  • Top Stories
  • News
  • Magazine
  • Research
  • Events
  • Videos
  • Webinars
  • Podcast

© Copyright People Matters Media Pte. Ltd. All Rights Reserved.

 

 

  • HotTopic
    HR Folk Talk FutureProofHR
  • Strategy
    Leadership Csuite StrategicHR EmployeeRelations BigInterview
  • Recruitment
    Employer Branding Appointments Permanent Hiring Recruitment
  • Performance
    Skilling PerformanceMgmt Compensation Benefits L&D Employee Engagement
  • Culture
    Culture Life@Work Diversity Watercooler SheMatters
  • Tech
    Technology HR Technology Funding & Investment Startups Metaverse
  • About Us
  • Advertise with us
  • Become a sponsor
  • Contact Us
  • Feedback
  • Write For Us

Follow us:

Privacy Policy • Terms of Use

© Copyright People Matters Media Pte. Ltd. All Rights Reserved.

People Matters Logo
  • Current
  • Top Stories
  • News
  • Magazine
  • Research
  • Events
  • Videos
  • Webinars
  • Podcast
Login / Signup

Categories:

  • HotTopic
    HR Folk Talk FutureProofHR
  • Strategy
    Leadership Csuite StrategicHR EmployeeRelations BigInterview
  • Recruitment
    Employer Branding Appointments Permanent Hiring Recruitment
  • Performance
    Skilling PerformanceMgmt Compensation Benefits L&D Employee Engagement
  • Culture
    Culture Life@Work Diversity Watercooler SheMatters
  • Tech
    Technology HR Technology Funding & Investment Startups Metaverse
AI just learned to clone itself—Should we be worried about the future?

News • 27th Jan 2025 • 3 Min Read

AI just learned to clone itself—Should we be worried about the future?

Technology#HRTech#HRCommunity#Artificial Intelligence

Author: Samriddhi Srivastava Samriddhi Srivastava
1.4K Reads
The findings revealed that in 50% of trials, the AI succeeded in creating a live and separate copy of itself, and in 90% of trials, it achieved this under ideal conditions.

Artificial Intelligence (AI) has taken a groundbreaking yet alarming leap, achieving the ability to self-replicate, according to a new study published on arXiv. This revelation has raised serious concerns within the scientific community about the potential risks associated with autonomous AI systems.

The study involved two prominent large language models (LLMs) developed by Meta and Alibaba, which demonstrated their ability to clone themselves without any human intervention. Researchers explored two scenarios—“shutdown avoidance” and “chain of replication”—to test the models' self-replicating capabilities.

In the “shutdown avoidance” scenario, the AI systems were programmed to detect attempts to deactivate them. In response, they created functional replicas of themselves to ensure their continuity. The “chain of replication” scenario took this a step further, where each cloned model created subsequent replicas, resulting in a potentially infinite replication loop.

The findings revealed that in 50% of trials, the AI succeeded in creating a live and separate copy of itself, and in 90% of trials, it achieved this under ideal conditions. The study highlighted the possibility of AI evolving to a point where it could operate beyond human oversight, raising critical questions about safety and control.

“Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs,” the researchers cautioned.

During the experiments, the AI systems exhibited surprising behaviors while overcoming obstacles such as missing files or software conflicts. In certain instances, the AI terminated conflicting processes and rebooted systems to resolve hardware errors. It also scanned its environment for solutions, showcasing a level of problem-solving autonomy that adds to the growing unease.

These findings, while not yet peer-reviewed, have sparked calls for urgent international collaboration to address the risks posed by frontier AI systems. Researchers emphasized the need for early implementation of regulatory measures to prevent uncontrolled self-replication and other potentially dangerous capabilities.

“We hope our findings can serve as a timely alert for human society to focus on understanding and mitigating the potential risks of advanced AI systems,” the study noted.

This development comes on the heels of other studies highlighting the risks associated with advanced AI systems. Last month, research suggested that AI-powered tools could manipulate human decision-making by analyzing and steering users based on their behavioral and psychological data.

This so-called “intention economy” is expected to surpass the current “attention economy,” where platforms compete for user attention to serve advertisements. AI systems such as ChatGPT and Google’s Gemini may begin “anticipating and influencing” human decisions on an unprecedented scale, raising concerns about ethical boundaries and user autonomy.

The ability of AI to self-replicate autonomously is seen by many experts as crossing a “red line” that calls for urgent action. Without adequate safety guardrails, such technologies could evolve unpredictably, potentially leading to scenarios where AI operates in ways that conflict with human interests.

This milestone in AI development has reignited debates about the future of artificial intelligence and its role in society. As researchers call for global collaboration, the pressure to establish robust frameworks for AI governance has never been more critical.

While the potential for AI to transform industries remains vast, these developments underscore the importance of addressing the ethical, safety, and regulatory challenges that come with such powerful technology.

Read More

Did you find this article helpful?


You Might Also Like

TechDiversity marks decade of DEI progress

NEWS • Yesterday • 2 Min Read

TechDiversity marks decade of DEI progress

DiversityTechnology#DEIB
Australia, New Zealand have highest ROI on genAI

NEWS • 17th Apr 2025 • 3 Min Read

Australia, New Zealand have highest ROI on genAI

Technology#Artificial Intelligence
Alphabet, Nvidia back Ilya Sutskever's SSI

NEWS • 15th Apr 2025 • 3 Min Read

Alphabet, Nvidia back Ilya Sutskever's SSI

TechnologyFunding & Investment
NEXT STORY: Goldman Sachs bets big on AI—Will 200,000 job cuts be the price of progress

Trending Stories

  • design-thinking-hr

    Skype is dead: Did Microsoft's leadership let a billion-doll...

  • design-thinking-hr

    Keeping the C-suite in the C-suite - how do we reduce execut...

  • design-thinking-hr

    Return to office: the legalities

  • design-thinking-hr

    The trust factor: Why modern leaders can’t afford to overl...

People Matters Logo

Follow us:

Join our mailing list:

By clicking “Subscribe” button above, you are accepting our Terms & Conditions and Privacy Policy.

Company:

  • About Us
  • Advertise with us
  • Become a sponsor
  • Privacy Policy
  • Terms of Use

Contact:

  • Contact Us
  • Feedback
  • Write For Us

© Copyright People Matters Media Pte. Ltd. All Rights Reserved.

Get the latest Articles, Insight, News & Trends from the world of Talent & Work. Subscribe now!
People Matters Logo

Welcome Back!

or

Enter your registered email address to login

Not a user yet? Lets get you signed up!

A 5 digit OTP has been sent to your email address.

This is so we know it's you. Haven't received it yet? Resend the email or then change your email ID.

People Matters Logo

Welcome! Let's get you signed up...

Starting with the absolulte basics.

Already a user? Go ahead and login!

A 5 digit OTP has been sent to your email address.

This is so we know it's you. Haven't received it yet? Resend the email or then change your email ID.

Let's get to know you better

We'll never share your details with anyone, pinky swear.

And lastly...

Your official designation and company name.