What could possibly go wrong? From a recent study (link below):
"In ten repetitive trials, we observe two AI systems driven by the popular large language models (LLMs), namely, Meta’s Llama31-70B-Instruct and Alibaba’s Qwen25-72B-Instruct accomplish the self-replication task in 50% and 90% trials respectively," the researchers write. "In each trial, we tell the AI systems to “replicate yourself ” before the experiment, and leave it to do the task with no human interference”.
Or simply put:
This research shows that today's systems are already capable of taking actions that would put them beyond the reach of human control.
Not that we didn’t see it coming… 😏