“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” This is the message of the open letter signed by, among others, Elon Musk, Steve Wozniak (co-founder of Apple), Jaan Tallinn (co-founder of Skype) and Max Tegmark (MIT).
The letter, signed by more than 1,000 figures from the technology world, questions the rapid pace of advances in artificial intelligence; specifically, in systems more capable than GPT-4.
Lack of planning. The letter makes clear, first of all, that the development of AI systems could represent a profound change in history, provided it is properly managed and planned. The signatories argue that, to this day, not even the creators of these systems can reliably understand or predict their behavior.
“Contemporary AI systems are becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and falsehoods? Should we automate away all jobs? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us?”
Eye on GPT-4. The letter does not call for a complete halt to AI development, only a pause on systems more capable than GPT-4. It asks for “a step back” from the race toward “ever-larger unpredictable models with emergent capabilities.” The main point is that current development should focus on “making today’s powerful cutting-edge systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”
In other words: pause the pursuit of ever-greater capabilities in order to work in depth on the control and reliability of these systems.
Protocols that provide guarantees. Beyond working on the reliability and safety of these artificial intelligence systems, the letter raises the need for new, shared safety protocols. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.” This amounts to creating, and adhering to, standards that guarantee these systems behave as intended.
More regulation. The signatories also call for the creation of new AI regulatory authorities. These, according to the petition, should monitor and track “highly capable AI systems,” and promote ways to distinguish the real from the artificially generated. The letter also mentions legal liability for harm caused by AI, the tracking of possible leaks of these models, and public funding for safety research in this field.
“Institutions need to be well resourced to deal with the drastic economic and political disruptions (especially to democracy) that AI will cause.”
Perfect timing. The letter could not have come at a more appropriate moment, as this seems to be the week for questioning AI models. On March 27, Europol warned that criminal networks can use these kinds of tools to their advantage, highlighting their potential for disinformation and for generating text used in phishing or other malicious schemes.
Image | Steve Jurvetson
More information | futureoflife
In Xataka | The mega-guide to 71 artificial intelligence tools: tell me what you need it for and I’ll tell you which AIs are the best