There are many debates going on in the field of artificial intelligence. One of the most interesting divides researchers, philosophers and engineers between those who think it is possible for AI-based systems to surpass humans in cognitive and reasoning capacity, and those who believe these systems will never be able to go beyond the limits of what can be learned and replicated through training.
This dichotomy is somewhat reminiscent of the one that separated balloon enthusiasts from airplane enthusiasts at the very beginning of the 20th century.
On November 21, 1783, Jean-François Pilâtre de Rozier and François Laurent, the Marquis d'Arlandes, flew for almost half an hour.
Transported on a circular platform attached to the bottom of a hot air balloon, the duo only had to manually feed the fire through the openings on either side of the balloon's skirt; the flight reached 150 meters in height and covered almost nine kilometers without problems.
When they disembarked from the paper-and-silk contraption built by the Montgolfier brothers, the Marquis and his companion in the adventure had to hand out bottles of champagne to the peasants, who feared they were witnessing something of the devil.
Only ten days later, the first manned gas balloon ascended into the skies of Paris, launched by the physicist Jacques Alexandre César Charles and the inventor Nicolas-Louis Robert. The flight lasted two and a half hours and covered a distance of 40 kilometers.
Benjamin Franklin pointed out the possible usefulness of balloons in warfare just three months after the first manned balloon flights in France in 1783.
And indeed, gas balloons would later hover over the trenches, and they remained the main means of air transport until the Wright brothers invented the fixed-wing aircraft in America in 1903 (yes, there is the case of Santos Dumont, but it was the Americans who pulled off the decisive leap).
But don't think the Wrights' feat was greeted with pomp and circumstance. The mathematics and astronomy professor Simon Newcomb, for example, thought that flight with machines heavier than air was "impractical and insignificant, if not downright impossible."
He wasn't alone. The engineering editor of The Times in London said:
“All attempts at artificial aviation are not only dangerous to human life, but doomed to failure from an engineering point of view.”
Technological advancement did the rest: in 2020, had the pandemic not struck, some 40 million flights would have taken off in machines descended from the device the Wright brothers lifted off the ground in 1903.
Balloons have taken on an almost romantic role and make appearances here and there as tourist or sporting experiences.
Returning to artificial intelligence: the field continues to advance, with new papers and systems being released constantly. The latest piece of this mosaic is a product of DeepMind, a company specializing in the field that was acquired by Google in 2014.
DeepMind's new paper describes Gato, a generalist artificial intelligence, which is what we call systems that have no single 'field of expertise' and can be used to produce different types of output: writing a recipe, drafting an advertising piece or even generating snippets of computer code.
Usually these systems are trained in a single 'language', so, although generalist within their domain, systems that generate text ONLY generate text, and those that work with images ONLY recognize images.
The novelty of Gato is that it is designed to perform 604 different types of tasks, in completely different domains, so that the same system can:
- write texts
- play Atari games
- recognize images
- control a robotic arm and stack boxes
- among others
This multiplicity of tasks is interesting because, even though the system was trained with different data for each task, the algorithm and the weights are the same; in other words, the ‘brain’ is the same, and it is capable of producing results in many different contexts, much like ours.
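To make the idea concrete, here is a minimal, purely illustrative sketch (this is not DeepMind's code; the model size, the shared vocabulary and the toy 'tokenizers' below are all assumptions) of how one network with one set of weights can consume tasks from very different domains once everything is serialized into the same stream of tokens:

```python
# Illustrative sketch only: every modality (text, pixels, controller/arm actions)
# is turned into integer tokens from one shared vocabulary, and a single model
# with a single set of weights processes them all.
import torch
import torch.nn as nn

VOCAB_SIZE = 1024   # assumed shared token vocabulary across all modalities
EMBED_DIM = 128
NUM_LAYERS = 2

class TinyGeneralist(nn.Module):
    """One set of weights, many tasks: everything arrives as token IDs."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        layer = nn.TransformerEncoderLayer(d_model=EMBED_DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=NUM_LAYERS)
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)  # scores for the next token

    def forward(self, tokens):
        x = self.embed(tokens)      # [batch, seq] -> [batch, seq, dim]
        x = self.encoder(x)
        return self.head(x)         # [batch, seq, vocab]

# Toy "tokenizers": each modality is mapped into the same integer vocabulary.
def tokenize_text(s):            # characters -> token IDs
    return torch.tensor([[min(ord(c), VOCAB_SIZE - 1) for c in s]])

def tokenize_image(pixels):      # flattened pixel values -> token IDs
    return torch.tensor([[int(p) % VOCAB_SIZE for p in pixels]])

def tokenize_actions(actions):   # discrete game/arm actions -> token IDs
    return torch.tensor([[a % VOCAB_SIZE for a in actions]])

model = TinyGeneralist()

# The SAME model (same weights) consumes sequences coming from different tasks.
for batch in (tokenize_text("stack the red box"),
              tokenize_image([0, 17, 255, 42]),
              tokenize_actions([3, 1, 4, 1, 5])):
    logits = model(batch)
    print(batch.shape, "->", logits.shape)
```

In Gato's case the real model is a much larger transformer trained on hundreds of datasets, but the principle sketched here is the same: the input, whatever its origin, becomes a sequence of tokens, and a single set of weights learns to predict what comes next.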
Until now, most generalist systems worked within a single specific domain: an AI capable of writing text could not recognize an image, and vice versa, and the workaround was to connect a different AI for each context.
The quality of the results varies from task to task, but some are promising. For example, Gato proved to be a better Atari player than human players in 23 different games.
For comparison purposes, the researchers also trained an agent specifically to play video games, and the expert agent achieved superior results in 44 games.
Advances keep happening, with larger and more sophisticated systems emerging all the time, as well as new machine learning techniques.
It is a vibrant field, and each innovation, once discovered and quickly shared, ends up serving as a stepping stone for new innovations and discoveries in a virtuous cycle.
The central question remains open:
- whether achieving the so-called ‘singularity’ (the point at which a machine becomes smart enough to learn how to learn, and artificial intelligence increases its own intelligence exponentially) is just a matter of data and processing power;
- whether we will need new algorithms and techniques; or
- whether, in fact, there is something unique about our brain that cannot be reproduced by machines.
For one of the authors of Gato, replying to an article that analyzed artificial intelligence and concluded it is a dead end, the path is already laid out, and it is now a matter of time and scale.
The eventual emergence of an artificial intelligence capable of learning and surpassing human intelligence raises other debates, about ethics and safety, with the most apocalyptic voices warning that this could be a point of no return for the very sovereignty of our species… but that is a theme for a future column 🙂