History of robotics: from the first automata to robots
Humans have always tried to design machines with life-like capabilities. Here is the history of robotics, from the first automata to today's robots.
Automata in history
Dei ex Machinis, a very comprehensive three-volume encyclopedia describing the lives and works of automaton makers from Antiquity to the beginnings of Artificial Intelligence (AI), was published in 2015 by Jean-Arcady Meyer; it is still available from its publisher, Les Éditions du Net.
Since Antiquity, several bio-inspired automata have been reported, including the flying pigeon of Archytas of Tarentum and the famous animated theatrical scenes of Heron of Alexandria.
In the sixteenth century, Leonardo da Vinci, inspired by anatomy, then a forbidden study, is said to have built the first android capable of coordinating the movements of its arms, its legs and even its jaw.
In the eighteenth century (considered the golden age of automata), the famous duck of Jacques de Vaucanson, now lost, dazzled the audiences of the time with its complexity: it could drink, feed, quack, splash about in the water, digest food and even... defecate.
At the same time, the Jaquet-Droz watchmakers built an automaton musician, writer and draughtsman, each performing the movements corresponding to the practice of its art.
In the nineteenth century, Faber's speaking automaton Euphonia was supposed to interact with the audience, and Baron von Kempelen's automaton the Turk played chess - probably operated by a human hidden inside the device.
The appearance of the first robots
It was not until the early twentieth century that robots made their appearance, following the work of engineers who wanted to test hypotheses put forward by biologists and psychologists. The electric dog designed by Hammond and Miessner in 1915 was attracted by light, in line with the animal phototropism described by Loeb in 1918.
The machines of Russell (1913) and Stephens (1929), the tortoises of Grey Walter (1950), the electronic fox of Ducrocq (1953) and the homeostat of Ashby (1952) were themselves endowed with learning capacities drawn directly from the work of the psychologists Thorndike (1911) and Hull (1943) and of the physiologist Pavlov (1903) on humans and animals.
These achievements are robots because they no longer behave like simple automata, whose motor organs - their mechanisms - obey a pre-established program. Unlike automata, these robots have sensory organs - sensors - gathering information from the environment that will, in turn, influence the activity of their motor organs - the actuators.
In the middle of the twentieth century, the work of McCulloch and Pitts on artificial neurons simulating the laws of logic (1943), and that of Turing on a universal machine theoretically able to solve all problems by manipulating symbols (1936, 1950), launched the idea that an artificial system might equal the human mind.
The appearance of artificial intelligence
In 1956, the advent of artificial intelligence (a name coined by McCarthy) prompted the design of systems with the deliberate goal of modeling the complexity of human intelligence. Researchers designed these systems as brains isolated from a body, excluding action from the development of their knowledge. Their achievements seemed to prove, by their efficiency, that computers could do without sensory and motor organs to reason or communicate.
For example, the General Problem Solver, the rule-based reasoning architecture of Newell and Simon (1963), was able to solve complex problems. An expert system like Mycin (Buchanan and Shortliffe, 1984) made diagnoses more quickly and accurately than a specialist doctor. Winograd's virtual robot SHRDLU (1971) could converse with the experimenter, asking for clarification about which object it was to pick in a world of virtual blocks.
In 1972, the philosopher Dreyfus's critique "What Computers Can't Do" nevertheless emphasized the limitations of these systems, stemming mainly from the fact that the programmer must have a priori knowledge of the conditions under which the cognitive architectures are tested. A human must provide and prepare the data that the artificial system receives as input, in particular in the form of symbols, and must also interpret the symbols that the system returns as output. In doing so, humans neglect to give the machine knowledge they do not deem useful, because it seems obvious to them, but which is absolutely crucial for solving everyday problems and for communication.
Cyc: artificial intelligence and common sense
This implicit knowledge, shared by all speakers, is part of what is called "common sense". It is this type of information that Lenat and Guha (from 1984 to 1990) tried to instil into Cyc, the last great cognitive-architecture project of so-called "classical" artificial intelligence, so that it could communicate in natural language with humans. Cyc was to elaborate its "common sense" by organizing knowledge of all kinds, supplied in the form of propositions by many people of all ages, conditions and social situations.
The project was unsuccessful, and the most frequently cited hypothesis for its failure is that Cyc could not experience the world through sensors and effectors, as all biological systems, human or not, do.
It is through experience that an organism builds all the information it needs to "understand" its physical and social environment and behave as well as possible. This necessary construction echoes the old notion of Umwelt ("own world") of the ethologist von Uexküll (1909), designating the representation of the world that each animal builds empirically with its own sensory and motor organs. By definition, this representation differs between species - because they do not have the same equipment - and between individuals of the same species - because they never live exactly the same experiences.
Automata and robots have evolved over the decades, but how were animats created? Discover this approach, which is defined as complementary to conventional artificial intelligence (AI).
In 1986, in reaction to the limitations of classical artificial intelligence (AI), Rodney Brooks, a former PhD student of McCarthy's, came up with the idea of designing a robot's control architecture by ruling out any "intelligent" notion of mental representation and by reintegrating action into the construction of the system's knowledge. He asserted that an artificial system must be conceived as a whole, behaving in a real environment. What he called "behavior-based robotics" thus rediscovered the spirit of the designers of the first robots of the early twentieth century.
The principle of animats
The animat approach aims at designing simulated artificial systems or real robots, inspired by animals, able to autonomously exhibit adaptive capacities in a complex, dynamic and unpredictable environment (Meyer and Guillot, 1991; Meyer, 1996). In this, it is complementary to classical AI, which can be more efficient when the characteristics of an environment are known or predictable. Animats are said to be "situated" because they apprehend the world in their own way, through their sensors and actuators, in order to react to it as well as possible, with minimal human intervention.
The purpose of this approach is therefore no longer to design systems as "intelligent" as a human, but to give them a learning capacity so that they adapt themselves to the required task. Animats have a dual purpose:
in applied research, to complement "engineered" robots, for which a human can supply all the information about the conditions of the task;
in fundamental research, to deepen knowledge of the autonomy of living systems.
Today, these machines form a whole bestiary: they borrow their modes of locomotion from arthropods, fish, reptiles, amphibians, birds... So, are animats the key to mobile robots?
In animat design, sensors of purely human invention, such as laser rangefinders, have gradually been replaced by devices inspired by the sensory organs of animals.
The robot and the vision of the fly
For example, at the CNRS Mouvement et Perception laboratory in Marseille, Nicolas Franceschini, Stéphane Viollet and Franck Ruffier equip flying robots with visual systems derived from the compound eye of the fly. These robots use optical flow, that is, the speed at which the image of obstacles scrolls across the retina, to avoid those obstacles and to land.
This team, in association with a European consortium, designed Curvace (Curved Artificial Compound Eyes), wryly calling it "the most accomplished visual sensor in the world, only 100 million years behind the fly"! It has almost panoramic horizontal and vertical vision and computes optical flow in real time, which, unlike other systems, makes it fully functional while in motion.
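The optical-flow landing principle can be sketched in a few lines. If a robot regulates its forward speed so that the flow (roughly, speed divided by height) stays constant, it decelerates automatically as the ground approaches. This is a minimal illustrative simulation under assumed numerical values, not the actual control law of these robots:

```python
def constant_flow_landing(height, target_flow=2.0, dt=0.1, glide=0.1):
    """Descend while holding optic flow (forward speed / height) constant."""
    speeds = []
    while height > 0.1:
        speed = target_flow * height   # speed chosen so flow stays at target
        speeds.append(speed)
        height -= glide * speed * dt   # descent proportional to forward speed
    return speeds

speeds = constant_flow_landing(10.0)
# the forward speed shrinks smoothly as the ground approaches
```

Because speed is proportional to height, the descent slows down exponentially, which is why an insect (or a robot) holding its optic flow constant touches down gently.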
Animal movement
Natural locomotion systems also inspire roboticists. Among other examples, a robot climbs walls using a great many adhesive micro-hairs, like those covering the tips of a gecko's legs. Cheetah, the MIT cheetah robot, clears obstacles that appear unexpectedly in front of it.
Atlas, the Boston Dynamics humanoid robot, can perform backflips. All these examples show the exceptional performances that current robots are capable of.
The rat robot Psikharpax
It is not enough for a robot to be able to move; it must also be able to orient itself in an environment, especially one unknown to humans (a distant planet, or the apartment next door). The Psikharpax project, initiated in our team, consisted in building a "bio-GPS" by simulating the neural circuits dedicated to navigation (such as place cells, grid cells and head-direction cells), as discovered in rats by researchers who have since been awarded the Nobel Prize (in 2014).
In close collaboration with Alain Berthoz's LPPA at the Collège de France, which experiments on real rats, our work allowed the rat robot Psikharpax to build a spatial representation with its visual, auditory, tactile and vestibular sensors. It can also choose between "cognitive" navigation, using this "mental map" when the goal is hidden, and a more reactive trajectory when the goal is perceptible.
In robotics, sensors and actuators only work because they are connected to a control device, in other words, to a kind of nervous system. So, do robots have a brain?
Artificial neural networks
In retrospect, we can see a paradoxical guideline running through the work in this field (up to the most recent): the human being has gradually withdrawn from the design of control architectures. These developments have gone hand in hand with advances in computer science, particularly in computing power, but also with the improvement of artificial neural networks. These are computer programs that compute output values from input values. Connected to sensors, to actuators or to other neurons, they weave a network that constitutes the "nervous system" of the robot.
In the first devices, all the parameters of this network were fixed and derived - most often - from knowledge acquired in biology. In a second stage, animats were built that modified their own nervous systems through phases of learning, in which successes and failures lead to readjustments of the connections between neurons. Here, the architecture is fixed and imposed by the designer, but the excitatory or inhibitory strengths of the connections vary. Finally, this emancipation reached its peak with the application, in the 1990s, of genetic algorithms and other evolutionary methods derived from Darwinian evolution.
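The second stage, a fixed architecture whose connection strengths are readjusted from successes and failures, can be sketched with a single artificial neuron. Here it learns the logical AND function (an assumed toy task) with the classic perceptron rule:

```python
def neuron(inputs, weights, bias):
    # weighted sum of the inputs, passed through a threshold activation
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# toy task: learn the logical AND function from examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for epoch in range(20):
    for inputs, target in data:
        error = target - neuron(inputs, weights, bias)   # success or failure
        # readjust the connection strengths after each trial
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error
```

After a few passes over the examples, the connection weights settle so that the neuron answers correctly on all four cases; only the strengths changed, never the architecture.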
Darwin in robots?
We then test the effectiveness, on the task the robot must perform, of a population of a hundred nervous systems generated at random. The best ones, together with a handful of less effective ones, "generate" a second generation through random crossings and modifications of the selected systems.
This second generation is in turn tested under the same conditions as the previous one, and the same selection-reproduction process is repeated. After a few thousand generations, the resulting nervous systems give the animat effective behavior. The control architectures of the canine and android robots developed by the Sony company, in Tokyo, stem in particular from this evolutionary robotics.
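The selection-reproduction loop can be sketched as follows. Here each "nervous system" is reduced to three connection weights, and the fitness is simply the distance to a hypothetical ideal; in a real experiment it would be measured on the robot's task:

```python
import random
random.seed(42)

TARGET = [0.5, -0.2, 0.8]          # hypothetical ideal connection weights

def fitness(genome):
    # the closer the genome's behavior to the target, the higher the score
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.2, scale=0.1):
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

# a population of a hundred randomly generated "nervous systems"
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(100)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    # the best ones plus a handful of less effective ones reproduce
    parents = population[:10] + random.sample(population[10:], 5)
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(100)]

best = max(population, key=fitness)
```

Keeping a few weaker parents, as in the description above, preserves genetic diversity and helps the population avoid getting stuck on a mediocre solution.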
Towards autonomous learning for robots
Emancipation continues, with attempts to minimize still further the role of the human designer in the very structure of control systems. Thus, artificial nerve cells appear, disappear and interconnect until a suitable system develops.
Gradually, thanks to these procedures inspired by biology, the roboticist can be freed from the preconceived ideas that could bias the design of these systems. For example, they will no longer choose the nature, number and position of a robot's visual sensors; these will be determined more effectively under the constraint of artificial selection. The roboticist is thus about to reproduce, in accelerated form, billions of years of evolution... However, these evolutionary methods have so far proved effective only when the design of the animat is not too complex.
Since 2010, another learning method has been on the rise: deep learning. Inspired by the layered organization of the human cortex, it stacks learning: each layer of artificial neurons receives as input the outputs of the previous layer, over ten or twenty layers. The system can thus "represent the world" far more richly than classical learning allows, recognizing for instance deformed objects or emotions on a face. It was by this method that, in 2016, the AlphaGo software managed to beat the world champion of the game of Go.
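The stacking principle, each layer of artificial neurons feeding the calculation of the next, can be sketched as a forward pass. The weights here are random, just to show the structure; a real deep network learns them, typically by backpropagation:

```python
import math
import random
random.seed(1)

def layer(inputs, weights, biases):
    # each neuron: weighted sum of the previous layer's outputs, then a nonlinearity
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def deep_forward(x, depth=10, width=8):
    # ten layers: the outputs of one layer become the inputs of the next
    for _ in range(depth):
        ws = [[random.uniform(-0.5, 0.5) for _ in range(len(x))]
              for _ in range(width)]
        bs = [0.0] * width
        x = layer(x, ws, bs)
    return x

out = deep_forward([0.2, -0.7, 0.5])
```

Each extra layer recombines the features computed by the one before it, which is what lets deep networks build progressively more abstract representations of their input.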
In addition to the various adaptive methods used in their design, robots are also getting smaller and smaller. Today there are autonomous microrobots so small that they can move about on a coin, even though they are equipped with a camera, a microphone, a radio link, sensors, etc. Focus on microrobotics and soft robotics.
Smaller and smaller robots ...
This extreme miniaturization aims to meet growing needs, both civilian and military, for microdrones, and covers many varieties of autonomous flapping-wing flying robots inspired by insect flight.
The goal is to produce robots of very small size, able to monitor their environment without themselves being easily spotted, and able to hover in order to save energy. For example, at the University of California, Berkeley (USA), an artificial insect 25 millimeters across is inspired by the aerodynamics of Drosophila.
The Interactions team at ISIR (Sorbonne University) also designs nanorobots less than 1 micrometer across, intended to act in a world invisible to humans. More spectacularly, a team from Cornell University (USA) built a nanometer-scale helicopter that couples the motor part of a molecule, ATPase, to metal blades that rotate when the system is supplied with energy. In the near future, such nanomachines may perhaps travel inside our cells to deliver, for example, specific drugs.
More and more soft robots
One of the recent innovations in robotics is "soft robotics", inspired by molluscs.
Such robots cannot injure users, are elastic and deformable, and can even be made of biodegradable materials. They may also be able to self-repair or self-degrade.
Octobot, manufactured in 2016 by Harvard University (USA) through 3D printing of silicone, is the first totally soft robot.
It moves and activates its eight tentacles autonomously, through a chemical reaction that releases a propelling gas. So far it has only 8 minutes of autonomy...
This page presents the design of hybrid systems between living organisms and machines. This is another path of biomimetic research, concerning "biobots", whose bodies are robotic but whose nervous and sensorimotor equipment can come from the living.
Living on robots
The robots developed at the University of Tokyo (Japan) are equipped with antennae taken from the silkworm moth, Bombyx mori.
Thanks to these antennae, the male moth detects at long range a few molecules of the pheromone secreted by the female, and can thus reach her. Equipped in the same way, these robots follow an olfactory trail and move through complex environments.
Other animats move, like the MIT swimmer, thanks to living actuators: real frog muscles. Still others use a real brain.
Indeed, Italian and American teams connected the nervous system of a lamprey to the light sensors and wheels of a mobile robot. The animal's neural circuits are able to steer the robot toward a light.
Keeping an isolated brain alive is not easy, so roboticists at the SUNY Health Science Center in Brooklyn (USA) opted for robot control by a brain in vivo: they managed to train monkeys and rats to use their brain waves to move a robotic arm.
When the robot controls the living
However, while we can foresee the enormous interest of these studies, for example for the control of prostheses by the brain activity alone of quadriplegic people, we can equally wonder about the ethics of other programs implemented by these same researchers. In other words, if the control of a machine by a living being poses no ethical problem, the same cannot be said when a machine governs the actions of a living being.
This is particularly the case when biologists remotely control the movements of a rat using electrical pulses sent to certain areas of its nervous system, even if the stated objective is to use the rat to detect the possible presence of humans buried under rubble.
If robots can be autonomous in their learning, the problem of energy autonomy remains, as does another obstacle in the evolution of robots: their versatility.
The problem of energy autonomy
Copying nature to build the best systems for locomotion, perception and adaptation is certainly useful if robots are to behave autonomously. However, an essential component of life has long been neglected by roboticists: energy autonomy. Today, the most advanced robots have an autonomy that does not exceed a few hours.
To overcome this difficulty, American and British roboticists have built animats that convert the energy produced by digesting sugars or ripe fruit into electricity. Even so, this remains a delicate question, and it is probably not the solution for the robots of the future. The development of a fledgling plant robotics, such as the bionic leaves of Harvard University, may allow the sustainable transformation of solar energy into biofuel.
Towards versatile robots
The vast majority of robots are specialized in a particular task, whereas a truly adaptive robot must be able to chain various behaviors: orient itself in a new environment, move around, perform a task, return to recharge, avoid obstacles, carry out another task, and so on.
The grail of researchers would be robots that can perform all the tasks required to help an elderly person in their own apartment - although nothing will probably replace human contact in these circumstances. Some Japanese humanoid robots are supposed to prepare breakfast, help a person move from bed to wheelchair, or tidy the refrigerator or dishwasher. However, they perform these tasks neither autonomously nor very effectively, but by being carefully preprogrammed.
Versatile robots are still in the making, despite the exponential development of humanoid robots that have only a human appearance. One exception, perhaps: iCub, the child robot studied in some twenty international laboratories, which patiently learns, like a child, to act on the world in diversified ways.
From the first automata to autonomous humanoid robots, robotics has drawn inspiration from nature, and its technologies bring ethical considerations with them. What are the challenges of tomorrow's robotics?
Man and robotics: the new Prometheus?
In the eighteenth century, Voltaire, dazzled by the automata of Jacques de Vaucanson, compared the scientist to Prometheus who, by imitating the springs of nature, seemed to steal the fire of heaven to animate bodies. These machines dramatically illustrated the stated purpose of Vaucanson, who sought "a reproduction of means to obtain the experimental intelligence of a biological mechanism."
Thus, three centuries later, biomimetic inspiration once again animates the designers of artificial systems. Rather than stealing the heavenly fire, however, they prefer to borrow the tricks that nature has discovered in the course of evolution, so that their creatures become, not as intelligent as humans, but as adaptive as the simplest of living systems.
The challenges of tomorrow's robotics
On the one hand, these creatures will give access to the mechanisms that contribute to the survival of animals, by providing physical models confronted with the same situations those animals encounter. On the other hand, these adaptive and autonomous robots may be useful in situations where an artificial agent must ensure its own "survival" or accomplish its mission without human help, in an unpredictable environment.
However, is today's robotics sustainable, dependent as it is on poor energy self-sufficiency and on machinery requiring extremely specialized maintenance? As long as researchers do not respect more fundamental natural principles, by building their robots with the least possible material, making them fully recyclable, and providing them with minimal means of self-repair and self-production (wishes that are utopian for the moment, I admit), bioinspiration will remain a somewhat anecdotal approach.