To: UK Government
AI requires international regulation - the red line has already been crossed!
Researchers at Fudan University conducted a series of experiments using two popular large language models (LLMs) developed by Meta and Alibaba. The study aimed to determine whether these AI systems could autonomously create functional replicas of themselves.
The results were startling: across 10 trials, the Meta model successfully self-replicated in 50% of cases, while the Alibaba model achieved an astonishing 90% success rate.
This ability of an AI to clone itself without human assistance represents a significant milestone in the field of artificial intelligence. It demonstrates a level of autonomy that was previously thought to be years, if not decades, away from realisation. But there is no agreed international regulation.
Why is this important?
AI, like a car, can be a great tool to get you where you need to be; but, also like a car, it can be weaponised.
Unless there is global agreement on the limits for the use of AI, the future is unclear. AI could cause the crashing of financial markets, the accidental shooting of an innocent individual, or the overproduction of goods in a factory that bankrupts a company.
Currently, the risks are not completely clear, but the fact that self-replication of an AI system is possible means that, without clear and concise regulation introduced quickly, a rogue AI system could replicate itself with unknown and potentially catastrophic results.
We, as human beings, should have final control over all technology; the moment technology can outthink or outsmart a human, we have lost. If nothing else, many jobs will no longer require a human. Yes, we will still need hairdressers and other very practical, hands-on workers, but there is little that cannot be automated.
How can we live in a world where the numbers of jobseekers and employed people are reversed?