1. GPT-4 is a fabulous artifact, truly amazing. I have been following large language models almost from the beginning, and I did not think they would reach such quality in their answers. I think everyone is puzzled to see a distributed semantic system perform so well. However, despite appearances, we are still a long way from artificial general intelligence. If we open the hood and look at its inner workings, it is a statistical text-generation system in which there is no thought, reasoning, or understanding properly speaking. GPT-4 is very dumb, and its good performance is just outward appearance. Although the analogy is not entirely fair, GPT-4 is not that far from Weizenbaum’s ELIZA, a program that, back in the 1960s, also generated impressive hype.
2. But despite being far from artificial general intelligence, GPT-4 and its counterparts are going to give rise to a series of magnificent tools that, broadly speaking, will make our lives much better, which is what technology has done, I insist, lato sensu, since the beginning of time. Technological progress is one of the most decisive factors behind the well-being that Western humanity enjoys in the 21st century. Any middle-class person in a developed country today lives better than a medieval king. And that, I think, justifies optimism. I bet (although I recognize that the futurologist’s task is very hard these days) that large language models will make the world better rather than worse. Let’s not turn technophobic at this point.
3. Disruptive technologies have always brought a certain uncertainty. Nobody knew what the printing press, the steam engine, electricity or the internet would bring, and plenty of people turned apocalyptic back then too. Let’s not do the same now. That, of course, does not mean we should lower our guard. We have to keep a close eye on everything the big tech companies do, very attentive to all the ugly moves that, without a doubt, they are going to attempt. And we have to legislate. And although we already know that, unfortunately, legislation always lags far behind technology, we must do it as well as possible. We live in an exciting time for lawyers. Let’s hope they’re up to it.
4. The famous letter from the Future of Life Institute calling for a six-month moratorium on the development of systems more powerful than GPT-4 is an empty gesture. No one believes the race for AI supremacy is going to stop for even a minute. Besides, six months? Will six months be enough to fix everything? And why not a year, or ten? Moreover, there are many other technologies (if not almost all of them) whose future harmful potential we do not know. We have our children completely addicted to social networks. Why not a moratorium there? A moratorium on TikTok, Spotify, Roblox, HBO…?
5. I find it funny that the letter talks about how we could lose control of civilization. It is an idea repeated a lot in talk of the technological singularity, but if we think about it: do we even have control of civilization now? Who holds that control? It is true that some spheres have much more power than others, but to say that something or someone controls the course of events is astonishingly naive. Power in the world, fortunately, is quite decentralized. Indeed, the latest world events: Trump, Brexit, Covid, the war in Ukraine… have been clear black swans, completely unpredictable. Did anyone have control over any of them? So who will lose control when AI takes over? Elon Musk? Interestingly (or not so much), Musk signed the letter at the same time he was stepping on the accelerator of Twitter’s AI.
6. The letter also makes the mistake of assuming that an AI, a single entity, will be the one to take control. One? Which one? OpenAI’s, DeepMind’s, perhaps Meta’s? The ones China is developing? There are many and very varied artificial intelligences. In fact, any piece of software on your computer or smartphone performs tasks that, if we saw a human do them, we would call intelligent. So all of us already have artificial intelligences in our hands. Will one of them lead the rebellion? It is curious how science fiction has installed itself in the brains of people who have to make important decisions that affect us all. Seriously, let’s all repeat it together: the machines will not rebel; there is not going to be a final war against them or an apocalypse of any kind. Terminator and The Matrix are fiction, not evidence-based forecasts.
7. The letter calls for these systems to be “accurate, secure, interpretable, transparent, robust, aligned, trustworthy, and loyal.” Accuracy, security, interpretability, transparency and robustness are already demanded of any software, and are not achieved: I do not have access to the Windows source code… But what I find funniest is asking that they be “aligned, trustworthy and loyal,” virtues I would sooner ask of a pet than of a software program. Nick Bostrom argues in his insufferable but widely quoted Superintelligence (one of the worst written books I have ever had the displeasure to read) that it is important that we build intelligences whose ethical values are aligned with ours. This is yet more sci-fi nonsense. Is someone going to start programming psychopathic software and put it in a position to make important decisions? Give it a seat in Parliament? When we have put neural network systems in charge of decisions and they have turned out to have racist, homophobic or sexist biases, we have moved quickly to correct them. Do we really believe that machines whose ethical values differ from ours will choose to exterminate us? Seriously, if a cosmic superintelligence appears, the first thing it will think of is our extermination? Being so smart, can it not think of anything else? And besides, aligned with what? The ethics of MIT’s AI experts? Of the Buddhist monks of Tibet? Of the Tuaregs of the Maghreb? Is there a universal ethical alignment?
8. The letter falls, again and again, for the well-known slippery-slope fallacy. This consists of chaining events with no proven connection between them until, at the end, they lead us to a terrible outcome. The chain goes like this: AI advances remarkably, AI equals and then surpasses humans, AI takes control, AI exterminates us. There is no proven connection showing that any of these events causes the next. I wish they would clearly explain the inevitable steps from designing a chatbot to it taking over civilization and annihilating us.
9. What I do agree with in the letter is that we should not leave decisions that will affect us all solely in the hands of the two or three CEOs of the tech giants. As likeable as Sam Altman may seem, no one has elected him democratically. I want elected political representatives to be the ones who decide the future. That is why it is very good that OpenAI, despite not having made its inner workings public (the two technical reports it has published give us no relevant information about its architecture, training, hardware, etc.), has made the use of ChatGPT public. We have all been able to use it, and now we are discussing its future uses. The public debate is reaching remarkable heights, and that, dear friends, is democracy. Much of what is being said will permeate the minds of politicians and industry gurus, and they will act accordingly. This is very good news.
10. Italy has got it wrong at the root. It is true that we need to check whether these models violate data protection law and work in that direction, but not go as far as a ban, especially at this moment. But this is the sad role Europe seems destined to play in AI. It has fallen completely behind in the race and, it seems, can only play bad cop. It is not bad that there is a bad cop, it is very necessary; the bad thing is being only that.
11. I cannot understand how Time magazine has given space to the ravings of Eliezer Yudkowsky. This man, believing himself to be Sarah Connor, says that since AI is going to cause the end of humanity, we should be willing to bomb data centers in any country, even at the risk of nuclear war. The media have a responsibility to set a minimum bar for what is publishable and what is not, and I thought a publication like Time would have one. From here I encourage the media not to publish garbage; perhaps the six-month moratorium should apply to journalism instead.
12. What we do have to worry about are the current problems: data protection, copyright, discriminatory bias, cybersecurity, fake news, military uses, etc. In other words, we must maintain the classic democratic vigilance that has to be exercised over any technological development, just as we do with medicines or waste recycling. All technology has always had a B-side, a possible perverse use. What we have to do is the usual: education and legislation. I want OpenAI’s engineers to have been educated with good moral values, and I want a good legislative framework that prevents misuse. That is where we need to be, not fighting Skynet.
Images | Sanket Mishra/Unsplash
In Xataka | Creative artificial intelligences are going to kill art again. It doesn’t matter in the slightest