Mira Murati, CTO at OpenAI, acknowledges the importance of proper AI regulation. Her statements come as the EU's Artificial Intelligence Regulation moves through the legislative process, close to approval.
Mira Murati, OpenAI's CTO, has recently acknowledged that ChatGPT, the artificial intelligence platform that lets users hold conversations with an automated language model, needs regulation.
According to the expert, the lack of oversight and control over AI processing tools can have serious consequences, such as the spread of fake news and hate speech online.
In a recent interview with the AP, Murati stressed the importance of regulation in the development of language processing technologies, stating that an ethical and legal framework is needed to ensure these tools are used responsibly.
The expert also pointed out that regulation must address a series of key issues, including transparency in algorithmic decision making, data privacy and security, and liability for the social effects of chatbot language processing technologies.
The EU is working on an Artificial Intelligence Regulation that requires transparency
The European Union provides a frame of reference, with extensive academic literature and regulatory work on the Artificial Intelligence Regulation under way since 2021. The European Parliament is finalizing a proposal, which will then have to be negotiated with the EU's other co-legislator, the Council of the European Union, and could get the green light in a few months.
The central regulatory safeguard under debate in the EU is transparency, the very point Murati alludes to. The rule would require those responsible for AI models in Europe to guarantee that documentation remains accessible for the 10 years following a model's launch, and to be transparent about their algorithms.
“When it comes to artificial intelligence, trust is a necessity, not a nice thing to have,” said Margrethe Vestager, Executive Vice President for a Europe Fit for the Digital Age.
The regulation also introduces a classification system to assess the level of risk an AI technology could pose to a person's health, safety and fundamental rights. The framework establishes four categories of risk: unacceptable, high, limited and minimal.
The proposed Artificial Intelligence Act provides for rigorous sanctions in case of non-compliance. For companies, fines could be as high as 30 million euros or 6% of their global revenue. In addition, submitting misleading or false documentation to regulators can also result in fines.
Involving academia, government and industry in the regulation of artificial intelligence
Mira Murati also wanted to underline the need to engage multiple stakeholders in developing effective regulations for ChatGPT and other similar tools. According to the expert, collaborative efforts among academia, industry and government are needed to address the challenges posed by these emerging technologies.
In addition, she urged researchers and developers to take ethical and social concerns into account early in the development process, and to work closely with ethicists and policy experts to ensure that technologies are designed and used responsibly.
Murati’s concern about the regulation of language processing technologies reflects a growing awareness of the ethical and social challenges posed by these emerging tools.
As artificial intelligence becomes more advanced, the need for effective regulation grows ever more pressing. While artificial intelligence has the potential to improve human life in many ways, it can also have negative consequences, such as the invasion of privacy and the creation of mass surveillance systems.
Who is Mira Murati?
Mira Murati is a recognized engineer in the artificial intelligence (AI) industry. She is currently the CTO of OpenAI, a leading AI research and development organization. In this role, Murati leads AI research efforts and is responsible for technology strategy and the direction of the engineering team.
Before joining OpenAI, Murati worked at several tech companies, including Tesla, where she was a senior product manager on the Model X, and Leap Motion, where she led product and engineering work.
In addition to her work in technology, Murati is also an advocate for diversity and inclusion in the tech industry and has spoken about the importance of increasing the representation of women and people of color in AI and other areas of technology.
Her work at OpenAI and other companies has helped drive the development of AI, and she has been recognized as one of the most influential women in Silicon Valley.