The human brain: an unknown biological computer. The brain is like a computer: bad at math, but good at everything else

The central idea running through the works of the famous Ray Kurzweil is artificial intelligence, which will eventually come to dominate every sphere of people's lives. In his new book, The Evolution of the Mind, Kurzweil reveals the endless potential of reverse engineering the human brain.

In the same article, Turing reported another unexpected discovery, concerning unsolvable problems. Unsolvable problems are those that are well defined and have a unique solution (which can be shown to exist), yet (as can also be shown) cannot be solved by any Turing machine - that is, by any machine at all. The existence of such problems fundamentally contradicts the dogma, formed at the beginning of the 20th century, that all problems that can be formulated are solvable. Turing showed that the number of unsolvable problems is no smaller than the number of solvable ones. In 1931, Kurt Gödel reached the same conclusion when he formulated his incompleteness theorem. This leaves us in a strange situation: we can formulate a problem and prove that it has a unique solution, while knowing that we will never be able to find that solution.

Turing also showed that computing machines operate on the basis of a very simple mechanism. Because a Turing machine (and therefore any computer) can determine its next action based on its previous results, it is capable of making decisions and building hierarchical information structures of any complexity.

In 1939, Turing designed an electromechanical machine called the Bombe, which helped decipher messages encoded by the Germans on the Enigma machine. By 1943 a team of engineers working with Turing had completed the Colossus machine, sometimes called the first computer in history; it allowed the Allies to decipher messages produced by a more sophisticated version of Enigma. The Bombe and Colossus were designed for a single task and could not be reprogrammed, but they performed their function brilliantly. It is believed that partly thanks to them the Allies were able to anticipate German tactics throughout the war, and the Royal Air Force was able to defeat a Luftwaffe force three times its size in the Battle of Britain.

It was on this foundation that John von Neumann created the architecture of the modern computer, which reflects the third of the four most important ideas of information theory. In the nearly seventy years since then, the basic core of this machine, called the von Neumann machine, has remained virtually unchanged - from the microcontroller in your washing machine to the largest supercomputer. In a paper published on June 30, 1945, entitled "First Draft of a Report on the EDVAC," von Neumann outlined the basic ideas that have guided the development of computer science ever since. A von Neumann machine has a central processing unit (CPU), where arithmetic and logical operations are performed; a memory module, in which programs and data are stored; mass storage; a program counter; and input/output channels. Although the paper was intended for internal use within the project, it became the Bible for computer designers. This is how a simple routine report can sometimes change the world.

The Turing machine was not intended for practical use. Turing's theorems were not concerned with the efficiency of solving problems but rather described the range of problems that can in theory be solved by a computer. Von Neumann's goal, by contrast, was to create the concept of a real computer. His model replaced Turing's one-bit system with a multi-bit word (usually some multiple of eight bits). A Turing machine has a serial memory tape, so programs spend a great deal of time moving the tape back and forth to store and retrieve intermediate results. In a von Neumann system, by contrast, memory is accessed randomly, so any required data item can be retrieved immediately.

One of von Neumann's key ideas is the concept of the stored program, which he had developed a decade before building the computer. The essence of the concept is that the program is stored in the same random-access memory module as the data (and often even in the same block of memory). This makes it possible to reprogram the computer for different problems and to write self-modifying code (if the program store is writable), which makes recursion possible. Until that time almost all computers, including Colossus, had been built for a specific task. The stored program allowed the computer to become a truly universal machine, realizing Turing's vision of the universality of machine computation.

Another important property of a von Neumann machine is that each instruction contains an operation code that specifies an arithmetic or logical operation and the address of the operand in the computer's memory.
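To make this architecture concrete, here is a minimal sketch of a stored-program machine in Python - my own toy illustration, not EDVAC's or any historical instruction set. Program and data share one memory, a program counter steps through the instructions, and each instruction is an opcode plus an operand address.

```python
# Minimal stored-program ("von Neumann-style") machine: illustrative only,
# not a model of EDVAC or any historical instruction set.
# Memory holds both instructions and data; each instruction is a pair
# (opcode, operand_address); a program counter selects the next instruction.

def run(memory):
    acc = 0          # single accumulator register
    pc = 0           # program counter
    while True:
        opcode, addr = memory[pc]
        pc += 1
        if opcode == "LOAD":      # acc <- memory[addr]
            acc = memory[addr]
        elif opcode == "ADD":     # acc <- acc + memory[addr]
            acc += memory[addr]
        elif opcode == "STORE":   # memory[addr] <- acc
            memory[addr] = acc
        elif opcode == "JUMP":    # unconditional branch
            pc = addr
        elif opcode == "HALT":
            return memory

# Program (cells 0-3) and data (cells 4-6) live in the same memory,
# which is what makes the machine reprogrammable in principle.
memory = [
    ("LOAD", 4),   # 0: acc = 2
    ("ADD", 5),    # 1: acc = 2 + 3
    ("STORE", 6),  # 2: write the result to cell 6
    ("HALT", 0),   # 3: stop
    2, 3, 0,       # 4-6: data
]
print(run(memory)[6])  # prints 5
```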

Von Neumann's conception of computer architecture was embodied in the EDVAC project, on which he worked with J. Presper Eckert and John Mauchly. The EDVAC itself did not become operational until 1951, by which time other stored-program computers already existed, such as the Manchester Small-Scale Experimental Machine, ENIAC, EDSAC and BINAC, all of them created under the influence of von Neumann's paper and with the participation of Eckert and Mauchly. Von Neumann was also involved in the design of some of these machines, including a later version of ENIAC that used the stored-program principle.

The von Neumann computer had several predecessors, but none of them - with one unexpected exception - can be called a true von Neumann machine. In 1944, Howard Aiken released the Mark I, which could be reprogrammed to some extent but did not use a stored program. The machine read instructions from punched paper tape and carried them out immediately; it also had no provision for conditional branches.

In 1941, the German scientist Konrad Zuse (1910–1995) created the Z-3 computer. It too read its program from tape (in this case punched film) and likewise had no conditional branching. Interestingly, Zuse received financial support from the German Institute of Aircraft Engineering, which used the machine to study the flutter of aircraft wings. However, Zuse's proposal to fund the replacement of relays with vacuum tubes was rejected by the Nazi government, which considered the development of computing "of no military importance." This, it seems to me, influenced the outcome of the war to a certain extent.

In fact, von Neumann had one brilliant predecessor, who lived a hundred years earlier. The English mathematician and inventor Charles Babbage (1791–1871) described his Analytical Engine in 1837; it was based on the same principles as von Neumann's computer and used a stored program, punched on cards of the kind used in jacquard looms. Its random-access memory held 1,000 words of 50 decimal digits each (equivalent to roughly 21 kilobytes). Each instruction contained an opcode and an operand number - just as in modern machine languages. The machine's programming included conditional branches and loops, so it was a true von Neumann machine. Entirely mechanical, it apparently exceeded Babbage's own design and organizational abilities: he built parts of the machine but never got it running.

It is not known for certain whether 20th-century computer pioneers, including von Neumann, were aware of Babbage's work.

Nevertheless, the creation of Babbage's machine marked the beginning of programming. The English writer Ada Byron (1815–1852), Countess of Lovelace and the only legitimate child of the poet Lord Byron, became the world's first computer programmer. She wrote programs for Babbage's Analytical Engine and debugged them in her head (since the machine never ran) - a practice programmers today call desk checking. She translated an article on the Analytical Engine by the Italian mathematician Luigi Menabrea, adding substantial comments of her own and noting that "the Analytical Engine weaves algebraic patterns just as the jacquard loom weaves flowers and leaves." She may have been the first to mention the possibility of artificial intelligence, though she concluded that the Analytical Engine "has no pretensions whatever to originate anything."

Babbage's ideas seem astonishing given the era in which he lived and worked. However, by the middle of the 20th century these ideas had been practically forgotten (and were rediscovered only later). It was von Neumann who conceived and formulated the key principles of the computer in its modern form, and it is not for nothing that the von Neumann machine is still considered the basic model of a computer. Let us not forget, though, that the von Neumann machine constantly exchanges data between its separate modules and within those modules, so it could not have been built without Shannon's theorems and the methods he proposed for the reliable transmission and storage of digital information.

All of this brings us to the fourth important idea, which overcomes Ada Byron's conclusion that a computer cannot think creatively: finding the key algorithms employed by the brain and then using them to turn a computer into a brain. Alan Turing formulated this problem in his 1950 paper "Computing Machinery and Intelligence," which describes the now famous Turing test for determining whether an AI has reached a human level of intelligence.

In 1956, von Neumann was preparing a series of lectures for the prestigious Silliman Lectures at Yale University. The scientist was already ill with cancer and was unable either to deliver the lectures or even to finish the manuscript on which they were based. Nevertheless, this unfinished work is a brilliant anticipation of what I personally regard as the most difficult and important project in human history. After the scientist's death it was published, in 1958, under the title "The Computer and the Brain." It so happened that the final work of one of the most brilliant mathematicians of the last century and one of the founders of computing was devoted to the analysis of thinking. This was the first serious study of the human brain from the point of view of a mathematician and computer scientist; before von Neumann, computer science and neuroscience were two separate islands with no bridge between them.

Von Neumann begins by describing the similarities and differences between the computer and the human brain. Considering the era in which the work was written, it is remarkably accurate. The scientist notes that the output of a neuron is digital: the axon either fires or it does not. At the time this was far from obvious, since the output could, in principle, have been analog. The processing in the dendrites leading into a neuron and in the neuron's body, however, is analog, and von Neumann described it as a weighted sum of the input signals compared against a threshold value.
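Von Neumann's description translates almost directly into code. The sketch below is only an illustration of the weighted-sum-with-threshold idea; the weights and threshold are arbitrary values, not measurements.

```python
# Von Neumann's picture of a neuron: analog inputs are weighted, summed,
# and compared against a threshold; the output is digital (fire / don't fire).
# The weights and threshold here are arbitrary illustrative values.

def neuron_fires(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum >= threshold   # digital output: True = axon fires

inputs = [0.9, 0.1, 0.7]       # analog signals arriving on the dendrites
weights = [0.5, 0.8, 0.3]      # synaptic strengths
print(neuron_fires(inputs, weights, threshold=0.6))   # True (0.74 >= 0.6)
```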

This model of how neurons function led to the development of connectionism and to the use of this principle in both hardware and software. (As I described in the previous chapter, the first such system - a program running on the IBM 704 - was created by Frank Rosenblatt of Cornell University in 1957, just after the manuscript of von Neumann's lectures became available.) We now have more sophisticated models of how neurons combine their inputs, but the general idea of analog processing via changing concentrations of neurotransmitters still holds.

Building on the universality of machine computation, von Neumann concluded that even though the architecture and building blocks of the brain and the computer seem radically different, a von Neumann machine can nonetheless simulate the processes taking place in the brain. The converse does not hold, however, because the brain is not a von Neumann machine and has no stored program as such (although in our heads we can simulate the operation of a very simple Turing machine). The brain's algorithms, or methods of operation, are implicit in its structure. Von Neumann rightly concluded that neurons can learn appropriate patterns from their inputs. What was not known in von Neumann's time is that learning also occurs through the creation and destruction of connections between neurons.

Von Neumann also pointed out that neurons process information very slowly - on the order of hundreds of calculations per second - but that the brain compensates for this by processing information in a massive number of neurons simultaneously. This is another obvious but very important observation. Von Neumann argued that all 10^10 neurons in the brain (this estimate is also quite accurate: current estimates put the number between 10^10 and 10^11) process signals at the same time. Moreover, all of their connections (on average 10^3 to 10^4 per neuron) are computed simultaneously.

Considering the primitive state of neuroscience at the time, von Neumann's estimates and descriptions of neuronal function are remarkably accurate. However, I cannot agree with one aspect of his work, namely his view of the brain's memory capacity. He assumed that the brain remembers every input for life. Von Neumann took the average human lifespan to be about 60 years, or roughly 2 x 10^9 seconds. If each neuron receives about 14 signals per second (which is actually three orders of magnitude below the true value), and the brain contains 10^10 neurons, the brain's memory capacity comes out at about 10^20 bits. As I wrote above, we remember only a small fraction of our thoughts and experiences, and even these memories are stored not as low-level, bit-by-bit information (like video) but as sequences of higher-level images.
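For the record, here is that estimate retraced in a few lines of Python, using the assumptions as reported above (not measured values):

```python
# Retracing von Neumann's memory estimate with the assumptions quoted above.
lifetime_seconds = 2e9            # ~60 years
signals_per_neuron_per_sec = 14
neurons = 1e10
bits = lifetime_seconds * signals_per_neuron_per_sec * neurons
print(f"{bits:.1e} bits")         # ~2.8e+20, i.e. on the order of 10^20 bits
```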

As von Neumann describes each mechanism of brain function, he shows how a modern computer could perform the same function, despite the apparent differences between brain and computer. The brain's analog mechanisms can be modeled digitally, because digital computation can emulate analog values to any desired degree of accuracy (and the precision with which the brain conveys analog information is quite low). The brain's massive parallelism can also be simulated, given the significant superiority of computers in serial computation speed (a superiority that has grown even further since von Neumann's time). In addition, computers can perform parallel processing by combining von Neumann machines in parallel - this is how modern supercomputers work.

Given the ability of humans to make rapid decisions at such low neural speeds, von Neumann concluded that brain functions cannot involve long sequential algorithms. When a third baseman receives the ball and decides to throw it to first rather than second base, he makes this decision in a fraction of a second - during which time each neuron barely has time to complete several cycles of excitation. Von Neumann comes to the logical conclusion that the brain's remarkable ability is due to the fact that all 100 billion neurons can process information simultaneously. As I noted above, the visual cortex makes complex inferences in just three or four cycles of neuronal firing.

It is the significant plasticity of the brain that allows us to learn. However, the computer has much greater plasticity - its methods can be completely changed by changing the software. Thus, a computer can imitate the brain, but the reverse is not true.

When von Neumann compared the massively parallel capacity of the brain with the few computers of his time, it seemed clear that the brain had far greater memory and speed. Today the first supercomputers have been built that, by the most conservative estimates, satisfy the functional requirements for simulating the human brain (about 10^16 operations per second). (In my view, computers of this power will cost around $1,000 in the early 2020s.) In terms of memory capacity we have come even further. Von Neumann's work appeared at the very dawn of the computer age, yet the scientist was confident that at some point we would be able to create computers and programs capable of imitating the human brain; that is why he prepared his lectures.

Von Neumann was deeply convinced of the acceleration of progress and of its significant future impact on people's lives. A year after von Neumann's death, his colleague the mathematician Stan Ulam quoted him as saying in the early 1950s that "the ever accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." This is the first known use of the word "singularity" in the context of human technological progress.

Von Neumann's most important insight was the discovery of the similarity between the computer and the brain. Note that part of human intelligence is emotional intelligence. If von Neumann's conjecture is correct, and if one accepts my claim that a non-biological system which satisfactorily reproduces the intelligence (emotional and otherwise) of a living person is conscious (see the next chapter), then we must conclude that there is a clear equivalence between a computer (with the right software) and conscious thought. So, was von Neumann right?

Most modern computers are entirely digital machines, whereas the human brain combines digital and analog techniques. But analog methods are easily reproduced digitally to any desired degree of accuracy. The American computer scientist Carver Mead (b. 1934) showed that the brain's analog methods can be reproduced directly in silicon, and implemented them in what he called neuromorphic chips. Mead demonstrated that this approach can be thousands of times more efficient than simulating analog methods digitally. When it comes to encoding the massively redundant algorithms of the neocortex, Mead's idea may well make sense. An IBM research team led by Dharmendra Modha is using chips that emulate neurons and their connections, including the ability to form new connections. One of the chips, called SyNAPSE, directly models 256 neurons and roughly a quarter of a million synaptic connections. The goal of the project is to simulate a neocortex of 10 billion neurons and 100 trillion connections (comparable to the human brain) while consuming only one kilowatt of power.

More than fifty years ago, von Neumann noticed that processes in the brain occur extremely slowly, but are characterized by massive parallelism. Modern digital circuits operate at least 10 million times faster than the brain's electrochemical switches. In contrast, all 300 million recognition modules of the cerebral cortex act simultaneously, and a quadrillion contacts between neurons can be activated at the same time. Therefore, to create computers that can adequately imitate the human brain, adequate memory and computing performance are required. There is no need to directly copy the architecture of the brain - this is a very inefficient and inflexible method.

What should such computers look like? Many research projects aim to model the hierarchical learning and pattern recognition that take place in the neocortex. I myself do similar research using hierarchical hidden Markov models. I estimate that simulating one recognition cycle in one recognition module of the biological neocortex requires about 3,000 calculations; most simulations get by with considerably fewer. If we assume the brain performs about 10^2 (100) recognition cycles per second, we get 3 x 10^5 (300,000) calculations per second per recognition module. Multiplying by the total number of recognition modules - by my estimate 3 x 10^8 (300 million) - gives 10^14 (100 trillion) calculations per second. I give roughly the same value in The Singularity Is Near, where I predict that a functional simulation of the brain requires between 10^14 and 10^16 calculations per second. Hans Moravec's estimate, extrapolated from the initial visual processing performed across the whole brain, is 10^14 calculations per second, which matches my own.
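The same arithmetic, retraced in Python using the estimates quoted above:

```python
# Calculations per second for a functional simulation of the neocortex,
# using the author's estimates as given in the text.
calcs_per_recognition_cycle = 3000
cycles_per_second = 1e2            # ~100 recognition cycles per second
recognition_modules = 3e8          # ~300 million modules
total = calcs_per_recognition_cycle * cycles_per_second * recognition_modules
print(f"{total:.0e} calculations per second")   # ~9e+13, on the order of 10^14
```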

Standard modern machines can perform up to 10^10 calculations per second, but their performance can be increased substantially with cloud resources. The fastest supercomputer, the Japanese K computer, has already reached 10^16 calculations per second. Given the massive redundancy of the neocortex's algorithms, good results could also be achieved with neuromorphic chips, as in the SyNAPSE technology.

As for memory requirements, we need about 30 bits (roughly 4 bytes) per connection in order to address one of the 300 million recognition modules. If an average of eight inputs arrive at each recognition module, that gives 32 bytes per module. Adding one byte for the weight of each input brings us to 40 bytes; adding 32 bytes for downstream connections gives 72 bytes. Note that the branching of upward and downward connections means the number of signals is really far greater than eight, even allowing for the fact that many recognition modules share a common, highly branched tree of connections. Recognizing the letter "p", for example, may involve hundreds of recognition modules, which means that thousands of next-level modules are involved in recognizing words and phrases containing "p". Yet each module responsible for recognizing "p" does not repeat the tree of connections feeding every level of word and phrase recognition; all of these modules share a single tree of connections.

The same holds for downward signals: the module responsible for recognizing the word "apple" will tell all the thousands of lower-level modules responsible for recognizing "e" that an "e" is expected once "a", "p", "p" and "l" have already been recognized. This tree of connections is not duplicated for every word or phrase recognition module that wants to inform lower-level modules that an "e" is expected; it is shared. For this reason an average estimate of eight upward and eight downward signals per recognition module is quite reasonable, and even if we increased it, it would not change the final result much.

So, taking into account 3 x 10^8 (300 million) recognition modules and 72 bytes of memory for each, we find that the total memory required is about 2 x 10^10 (20 billion) bytes. And this is a very modest value; conventional modern computers have this much memory.
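And the memory estimate, retraced the same way with the figures given above:

```python
# Memory estimate for the neocortex simulation (author's figures).
bytes_per_module = 72              # 32 upstream addresses + 8 weights + 32 downstream
recognition_modules = 3e8          # ~300 million modules
total_bytes = bytes_per_module * recognition_modules
print(f"{total_bytes:.1e} bytes")  # ~2.2e+10, i.e. about 20 GB
```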

We performed all these calculations to get a rough estimate of the parameters. Given that digital circuits are roughly 10 million times faster than the neural networks of the biological cortex, we do not need to reproduce the brain's massive parallelism: quite moderate parallel processing (compared with the trillion-fold parallelism of the brain) will be sufficient. Thus the required computational parameters are quite attainable. The brain's ability to rewire itself (remember that dendrites are constantly creating new synapses) can also be emulated in software, since computer programs are far more plastic than biological systems - which, as we have seen, are impressive but have their limits.

The brain redundancy required to obtain invariant results can certainly be reproduced in a computer version. The mathematical principles for optimizing such self-organizing hierarchical learning systems are quite clear. The organization of the brain is far from optimal. But it doesn't have to be optimal - it has to be good enough to enable the creation of tools that compensate for its own limitations.

Another limitation of the neocortex is that it has no mechanism for eliminating, or even evaluating, conflicting data; this partly explains the very common illogicality of human reasoning. To deal with this problem we have a rather weak capability called critical thinking, but people use it far less often than they should. A computer neocortex could include a process that flags conflicting data for subsequent review.

It is important to note that designing an entire brain region is simpler than designing a single neuron. As already noted, models often become simpler at higher levels of the hierarchy (there is an analogy with computers here). Understanding how a transistor works requires a detailed understanding of the physics of semiconductor materials, and the behavior of a single real transistor is described by complex equations. A digital circuit that multiplies two numbers contains hundreds of transistors, yet one or two formulas suffice to model such a circuit. An entire computer, consisting of billions of transistors, can be modeled by its instruction set and register description on a few pages of text with a few formulas. Operating systems, language compilers and assemblers are fairly complex programs, but modeling a particular program (say, a speech recognition program based on hierarchical hidden Markov models) likewise comes down to a few pages of formulas; nowhere in such a program will you find detailed descriptions of the physical properties of semiconductors, or even of the computer's architecture.
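A toy illustration of this layering of models (my example, not one from the text): a multiplier can be modeled by a single formula, and one level down by shift-and-add logic over bits, with no mention of the transistors or semiconductor physics beneath.

```python
# A digital multiplier contains hundreds of transistors, but one level up it
# is modeled by a single formula; nothing about device physics appears here.

def multiplier_model(a: int, b: int) -> int:
    return a * b   # the entire behavioral model of the circuit

# One level down, the same behavior as shift-and-add logic over bits
# (for non-negative integers); further down it would be gates, then
# transistors, then semiconductor physics. Each level hides the one below.
def multiplier_shift_add(a: int, b: int) -> int:
    result = 0
    while b:
        if b & 1:          # if the lowest bit of b is set...
            result += a    # ...add the shifted copy of a
        a <<= 1
        b >>= 1
    return result

assert multiplier_model(12, 11) == multiplier_shift_add(12, 11) == 132
```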

A similar principle applies to modeling the brain. A particular neocortical recognition module that detects certain invariant visual images (such as faces), filters audio frequencies (restricting the input to a certain frequency range), or evaluates the temporal proximity of two events can be described in far less detail than the actual physical and chemical interactions that govern the neurotransmitters, ion channels and other elements of neurons involved in transmitting a nerve impulse. Although all these details must be considered carefully before moving up to the next level of complexity, much can be simplified when modeling the operating principles of the brain.


The brain is the organ that coordinates and regulates all the vital functions of the body and controls behavior. All our thoughts, feelings, sensations, desires and movements are associated with the work of the brain, and when it ceases to function the person falls into a vegetative state, losing the ability to act, to feel, or to respond to external stimuli.

Computer model of the brain

The University of Manchester has begun building the first of a new type of computer, the design of which imitates the structure of the human brain, BBC reports. The cost of the model will be 1 million pounds.

A computer built on biological principles, says Professor Steve Furber, should demonstrate significant stability in operation. “Our brain continues to function despite the constant failure of the neurons that make up our nervous tissue,” says Furber. “This property is of great interest to designers who are interested in making computers more reliable.”

Brain Interfaces

In order to lift a glass several feet using mental energy alone, wizards had to train for several hours a day.
Otherwise, the lever principle could easily squeeze the brain out through the ears.

Terry Pratchett, "The Color of Magic"

Obviously, the crowning achievement of the human-machine interface would be the ability to control a machine by thought alone; and feeding data directly into the brain is the pinnacle of what virtual reality could achieve. The idea is not new and has featured in science fiction for many years: nearly all of cyberpunk involves direct connections to cyberdecks and biosoftware, and control of machinery via a standard brain socket appears, for example, in Samuel Delany's novel "Nova," along with a great many other variations. But fiction is fiction - what is being done in the real world?

It turns out that the development of brain interfaces (BCI or BMI - brain-computer interface and brain-machine interface) is in full swing, although few people know about it. Of course, the results are still very far from what science fiction describes, but they are nevertheless quite noticeable. Work on brain and nerve interfaces is currently being carried out mainly as part of the development of prosthetics and devices to make life easier for partially or completely paralyzed people. All projects can be divided into input interfaces (restoring or replacing damaged sense organs) and output interfaces (controlling prostheses and other devices).

In all cases of direct data input, it is necessary to perform surgery to implant electrodes into the brain or nerves. In case of output, you can get by with external sensors for taking an electroencephalogram (EEG). However, EEG is a rather unreliable tool, since the skull greatly weakens brain currents and only very generalized information can be obtained. If electrodes are implanted, data can be taken directly from the desired brain centers (for example, motor centers). But such an operation is a serious matter, so for now experiments are being conducted only on animals.

In fact, humanity has long had such a "single" computer. According to Wired magazine co-founder Kevin Kelly, the millions of Internet-connected PCs, mobile phones, PDAs and other digital devices can be considered components of a single computer. Its central processor is all the processors of all connected devices, its hard drive is the hard drives and flash drives of the whole world, and its RAM is the combined memory of all these computers. Every second this computer processes an amount of data equal to all the information contained in the Library of Congress, and its operating system is the World Wide Web.

Instead of the synapses of nerve cells, it uses functionally similar hyperlinks. Both are responsible for creating associations between nodes. Each unit of thought, such as an idea, grows as more and more connections are made with other thoughts. The same is true online: a large number of links to a given resource (node) means greater significance for the Computer as a whole. Moreover, the number of hyperlinks on the World Wide Web is already close to the number of synapses in the human brain. Kelly estimates that by 2040 the planetary computer will have computing power commensurate with the collective brain power of all 7 billion people who will inhabit the Earth by then.

But what about the human brain itself? A long-obsolete biological mechanism. Our gray matter runs at the speed of the very first Pentium processor from 1993 - in other words, at a frequency of about 70 MHz. Moreover, our brains work on an analog principle, so no direct comparison with digital data processing is possible. This is the main difference between synapses and hyperlinks: synapses, reacting to their environment and to incoming information, continually change the organism, which is never in the same state twice, whereas a hyperlink is always the same - otherwise problems begin.

However, it must be admitted that our brain is far more efficient than any artificial system people have created. In some entirely mysterious way, all of the brain's gigantic computing capacity fits inside our skull, weighs just over a kilogram, and requires only about 20 watts of power. Compare that with the 377 billion watts that, by rough estimates, the Single Computer now consumes - as much as 5 percent of global electricity production.

The sheer monstrousness of this energy consumption will never allow the Single Computer to even approach the efficiency of the human brain. Even in 2040, when the computing power of computers becomes sky-high, their energy consumption will only keep growing.


No matter how hard they try, neuroscientists and cognitive psychologists will never find a copy of Beethoven's fifth symphony in the brain, or a copy of words, images, grammatical rules, or any other external stimuli. The human brain is, of course, not literally empty. But it doesn't contain most of the things people think it should - it doesn't even contain simple objects like "memories".

Our misconceptions about the brain have deep historical roots, but the invention of the computer in the 1940s has particularly confused us. For more than half a century, psychologists, linguists, neurophysiologists and other researchers of human behavior have been saying: the human brain works like a computer.

To see how superficial this idea is, consider the brain of a newborn baby. Thanks to evolution, human newborns, like the newborns of any other mammalian species, enter the world ready to interact with it effectively. A baby's vision is blurry, but it pays special attention to faces and can quickly pick out its mother's face among others. It prefers the sound of voices to other sounds and can distinguish one basic speech sound from another. We are, without doubt, built with social interaction in mind.

A healthy newborn has more than a dozen reflexes - ready-made reactions to certain stimuli that are needed for survival. It turns its head toward whatever brushes its cheek and sucks whatever enters its mouth. It holds its breath when plunged into water. It grips things placed in its hands so tightly that it can almost hang from them.

Perhaps most importantly, infants come into the world with very powerful learning mechanisms that allow them to change rapidly, so that they can interact with the world ever more effectively, even if that world is quite unlike the one their distant ancestors encountered.

Feelings, reflexes and learning mechanisms are all what we start with, and truth be told, there are quite a lot of these things if you think about it. If we didn't have one of these capabilities from birth, we would have a much harder time surviving.

But here is what we are not born with: information, data, rules, software, knowledge, vocabularies, representations, algorithms, programs, models, memories, images, processing, subroutines, encoders and decoders, symbols and buffers - the design elements that allow digital computers to behave in ways that somewhat resemble intelligence. Not only are we not born with these things, we never develop them. Ever.

We don't keep words or rules telling us how to use them. We do not create visual projections of stimuli, store them in a short-term memory buffer, and then transfer them to long-term memory storage. We do not extract information or images and words from memory registers. This is what computers do, but not organisms.

Computers literally process information - numbers, letters, words, formulas, images. The information must first be encoded in a format computers can use, which means represented as ones and zeros ("bits") collected into small blocks ("bytes"). On my computer, where each byte contains 8 bits, some bytes represent the letter "C", others "A", others "T", and together they form the word "CAT". A single image - say, the photo of my cat Henry on my desktop - is represented by a particular pattern of a million such bytes ("one megabyte"), framed by special characters that tell the computer it is a photograph and not a word.
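As a small illustration of the encoding just described (assuming the common ASCII/UTF-8 convention), here is the word "CAT" as three bytes:

```python
# The word "CAT" as a computer stores it: three bytes, one per letter,
# under the ASCII/UTF-8 convention (8 bits per byte).
word = "CAT"
for ch in word:
    print(ch, format(ord(ch), "08b"))   # C 01000011, A 01000001, T 01010100
print(word.encode("ascii"))             # b'CAT' - three bytes
```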

Computers literally move these patterns from place to place among different physical storage areas allocated within electronic components. Sometimes they copy the patterns, and sometimes they transform them in various ways - say, when we correct an error in a document or retouch a photograph.

The rules the computer follows for moving, copying, and operating on these arrays of data are also stored inside the computer. Collections of such rules are called "programs" or "algorithms." A group of algorithms that work together to help us do something (such as buying a stock or searching for data online) is called an "application."

Please forgive me this introduction to the world of computers, but I need to make one thing very clear: computers really do operate on symbolic representations of the world. They really do store and retrieve. They really do process. They really do have physical memories. They really are guided by algorithms in everything they do, without exception.

People, on the other hand, do not do any of this - they never have and never will. Given this, I would like to ask: why do so many scientists talk about our mental life as if we were computers?

In his book In Our Own Image (2015), artificial intelligence expert George Zarkadakis describes six different metaphors that people have used over the past two millennia, trying to describe human intelligence.

In the very first, the biblical one, people were created from clay and mud, which the intelligent God then endowed with his soul, “explaining” our intelligence - at least grammatically.

The invention of hydraulic engineering in the 3rd century BC led to the popularity of hydraulic models of human intelligence - the idea that the various fluids of our body, the so-called "humours," account for both our physical and our mental functioning. The metaphor persisted for more than 16 centuries and was used in medical practice all that time.

By the 16th century, automatic mechanisms driven by springs and gears had been developed; they finally inspired leading thinkers of the time, such as René Descartes, to hypothesize that humans are complex machines.

In the 17th century, British philosopher Thomas Hobbes proposed that thinking arose from mechanical vibrations in the brain. By the early 18th century, discoveries in the fields of electricity and chemistry led to new theories of human intelligence - and these, again, were metaphorical in nature. In the middle of the same century, German physicist Hermann von Helmholtz, inspired by advances in communications, compared the brain to a telegraph.

If the information processing (IP) metaphor - the idea that the brain works like a computer - is so silly, why does it still rule our minds? What keeps us from casting it aside, as we would a branch blocking our path? Is there a way to understand human intelligence without leaning on these imaginary crutches? And what has it cost us to lean on this support for so long? The metaphor has, after all, inspired writers and thinkers to an enormous amount of research across many fields of science for decades. At what cost?

In a classroom exercise I have conducted many times over the years, I begin by choosing a volunteer and asking him to draw a one-dollar bill on the board. "More detail," I say. When he finishes, I cover the drawing with a sheet of paper, take a bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she finishes, I remove the paper from the first drawing, and the class comments on the differences.

You may never have seen a demonstration like this, or you may have trouble picturing the result, so I asked Jinny Hyun, one of the student interns at the institute where I do my research, to make the two drawings. Here is her drawing "from memory" (note the metaphor):

And here is a drawing she made using a banknote:

Jinny was as surprised by the outcome as you probably are, but it is not unusual. As you can see, the drawing made without reference to the bill is terrible compared with the one copied from it, even though Jinny has seen a dollar bill thousands of times.

What is going on? Don't we have a "representation" of what a dollar bill looks like "downloaded" into a "memory register" in our brain? Can't we simply "retrieve" it from there and use it to make our drawing?

Of course not, and even thousands of years of neuroscience research would not reveal the idea of a dollar bill stored in the human brain, simply because it is not there.

A significant body of brain research shows that, in fact, numerous and sometimes extensive areas of the brain are often involved in seemingly trivial memory tasks.

When a person experiences strong emotions, millions of neurons in the brain can become more active. In 2016, the University of Toronto neuroscientist Brian Levine and his colleagues conducted a study of plane crash survivors which concluded that the events of the crash had increased neural activity in "the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex" of the passengers.

The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is absurd; if anything, it merely pushes the problem of memory to an even more challenging level: how and where, after all, is a memory stored within a cell?

So what is happening when Jinny draws the dollar bill with no bill present? If Jinny had never seen a dollar bill before, her first drawing would probably not have resembled the second at all. The fact that she had seen dollar bills before changed her in some way. Specifically, her brain was changed so that she could visualize the bill - which is, at least in part, equivalent to re-experiencing the act of seeing it.

The difference between the two sketches reminds us that visualizing something (that is, re-experiencing seeing something that is no longer before our eyes) is far less accurate than actually seeing it. This is why we are much better at recognizing than at recalling.

When we re-produce something from memory (from the Latin re, "again," and producere, "to bring forth"), we have to try to relive the encounter with that object or event; but when we merely recognize something, we only have to be aware of the fact that we have had this perceptual experience before.

Perhaps you will object to this demonstration. Jinny had seen dollar bills before, but she had never made a deliberate effort to "memorize" the details. You might argue that, had she done so, she could have drawn the second image without the bill being present. Even so, however, no image of the bill was in any sense "stored" in Jinny's brain. She had simply become better prepared to draw it in detail, just as a pianist becomes more skilled at playing a concerto through practice, without having to download a copy of the sheet music.

From this simple experiment, we can begin to build the basis of a metaphor-free theory of intelligent human behavior - one in which the brain is not completely empty, but at least free of the burden of IP metaphors.

As we move through life, many things happen to us, and three types of experience are particularly noteworthy: 1) we observe what is happening around us (how other people behave, the sounds of music, instructions addressed to us, words on pages, images on screens); 2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); 3) we are punished or rewarded for behaving in certain ways.

We become more effective when we change in response to these experiences - if we can now recite a poem or sing a song, if we are able to follow the instructions given to us, if we respond to the unimportant stimuli as we do to the important ones, if we refrain from behaving in ways that bring punishment and behave more often in ways that bring reward.

Despite the misleading headlines, no one has the slightest idea what changes occur in the brain after we learn to sing a song or recite a poem. But neither the songs nor the poems have been "downloaded" into our brains. The brain has simply changed in an orderly way so that we can now sing the song or recite the poem under certain conditions.

When called on to perform, neither the song nor the poem is "retrieved" from some location in the brain - any more than the movements of my fingers are "retrieved" when I drum on the table. We simply sing or recite, and no retrieval is needed.

A few years ago I asked Eric Kandel - the Columbia University neuroscientist who won a Nobel Prize for identifying some of the chemical changes that occur in the neuronal synapses of the Aplysia (a sea snail) after it learns something - how long he thought it would take before we understand how human memory works. He quickly replied, "A hundred years." I did not think to ask whether he believed the IP metaphor was slowing down progress in neuroscience, but some neuroscientists are indeed beginning to think the unthinkable - that the metaphor is not actually necessary.

A number of cognitive scientists—notably Anthony Chemero of the University of Cincinnati, author of the 2009 book Radical Embodied Cognitive Science—now completely reject the notion that the human brain operates like a computer. The common belief is that we, like computers, make sense of the world by performing calculations on our mental images, but Chemero and other scientists describe a different way of understanding the thought process - they define it as the direct interaction between organisms and their world.

My favorite example of the enormous difference between the IP approach and what some call the "anti-representational" view of how the human body functions involves two different explanations of how a baseball player manages to catch a fly ball, given by Michael McBeath, now at Arizona State University, and his colleagues in a paper published in Science in 1995.

According to the IP approach, the player must form rough estimates of the initial conditions of the ball's flight - the force of the impact, the angle of the trajectory, and so on - then build and analyze an internal model of the path the ball is likely to follow, and finally use that model to continuously guide and adjust in time the movements needed to intercept the ball.

That would all be well and good if we functioned like computers, but McBeath and his colleagues gave a simpler explanation: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with home plate and the surrounding scenery (technically, to keep to a "linear optical trajectory"). This may sound complicated, but it is in fact extremely simple and involves no calculations, representations or algorithms.
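A hedged sketch of the contrast, in code. The "IP-style" account computes a trajectory model from initial conditions; the heuristic account only checks whether a single optical quantity is curving and nudges the fielder accordingly. This is my own toy illustration of the idea (closer to the related "optical acceleration cancellation" rule than to McBeath's exact formulation), and all numbers are invented.

```python
import math

G = 9.81  # m/s^2

def ip_style_landing_point(speed, launch_angle_deg):
    """'IP-style' account: estimate initial conditions, then compute an
    internal model of the whole flight - here, the landing distance of an
    ideal drag-free projectile."""
    a = math.radians(launch_angle_deg)
    return (speed ** 2) * math.sin(2 * a) / G

def heuristic_adjustment(tan_elevation_samples):
    """Heuristic account: no trajectory model at all. The fielder only
    checks whether the tracked optical quantity (tangent of the gaze
    elevation angle) is rising along a straight line, and moves to cancel
    any curvature."""
    a, b, c = tan_elevation_samples[-3:]        # three successive samples
    curvature = (c - b) - (b - a)               # discrete second difference
    if curvature > 0:
        return "back up"       # image accelerating upward: ball will land behind
    if curvature < 0:
        return "move forward"  # image decelerating: ball will land in front
    return "hold course"       # linear optical trajectory: on target

print(ip_style_landing_point(30.0, 45.0))          # ~91.7 m
print(heuristic_adjustment([0.30, 0.45, 0.62]))    # "back up"
```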

Two determined psychology professors at Leeds Beckett University in the UK, Andrew Wilson and Sabrina Golonka, count the baseball example among many that can be understood outside the IP framework. For years they have blogged about what they call "a more coherent, naturalized approach to the scientific study of human behavior... going against the dominant cognitive neuroscience approach."

This approach, however, is still far from forming a movement of its own; most cognitive scientists refuse to criticize the IP metaphor and continue to cling to it, and some of the world's most influential thinkers have made grand predictions about humanity's future that depend on the metaphor's validity.

One of these predictions - made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others - holds that since human consciousness supposedly works like computer software, it will soon be possible to upload the human mind into a machine, giving us infinitely powerful intellects and, quite possibly, immortality. This idea formed the basis of the dystopian film Transcendence, with Johnny Depp playing a Kurzweil-like scientist whose mind is uploaded to the Internet - with horrific consequences for humanity.

Fortunately, since the IP metaphor is in no way true, we will never have to worry about a human mind running amok in cyberspace, and we will never be able to achieve immortality by uploading ourselves somewhere. The reason is not just the absence of consciousness software in the brain; the problem is deeper - call it the problem of uniqueness - and it is both inspiring and depressing.

Because neither "memory banks" nor "representations" of stimuli exist in the brain, and because all that is required for us to function in the world is that the brain change in an orderly way as a result of our experience, there is no reason to believe that the same experience changes any two of us in the same way. If you and I attend the same concert, the changes that occur in my brain as I listen to Beethoven's Fifth Symphony will almost certainly be different from those that occur in yours. Those changes, whatever they are, are built on the unique neural structure that already exists, a structure that has developed over a lifetime of unique experiences.

As Sir Frederick Bartlett showed in his book Remembering (1932), this is why no two people will ever repeat a story they have heard in the same way, and over time their stories will become more and more different from each other.

No "copy" of the story is ever made; rather, each listener is changed to some degree by hearing it - enough that when asked about the story later (in some cases days, months or even years after Bartlett first read it to them), they can re-experience the moments of hearing it to some extent, though not very accurately (see the first drawing of the dollar bill above).

I suppose this is inspiring because it means that each of us is truly unique - not just in our genetic code, but even in the way our brains change over time. It's also depressing because it makes the grand challenge of neuroscience seem almost beyond imagination. For each of our daily experiences, the orderly change may involve thousands, millions of neurons, or even the entire brain, since the process of change is different for each individual brain.

What's worse, even if we had the ability to take a snapshot of all 86 billion neurons in the brain and then simulate the state of those neurons using a computer, this extensive template would not apply to anything outside the brain in which it was originally created.

This is perhaps the most monstrous effect that the IP metaphor has had on our understanding of the functioning of the human body. While computers do store exact copies of information—copies that can remain unchanged for a long time, even if the computer itself has been de-energized—our brains only maintain intelligence while we are alive. We don't have on/off buttons.

Either the brain keeps working, or we are gone. Moreover, as the neuroscientist Steven Rose pointed out in his 2005 book The Future of the Brain, a snapshot of the brain's current state might also be meaningless unless we knew the entire life history of that brain's owner - perhaps even the details of the social environment in which he or she grew up.

Think how difficult this problem is. To understand even the basics of how the brain supports human intelligence, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, and not just the varying strengths with which they are connected, but also how the brain's moment-to-moment activity contributes to the integrity of the system.

Add to this the uniqueness of each brain, created in part by the uniqueness of each person's life history, and Kandel's prediction starts to look overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested that just working out basic neuronal connectivity would take "centuries.")

Meanwhile, vast sums of money are being poured into brain research based on often flawed ideas and unfulfilled promises. The most egregious instance of neuroscience research gone awry was documented in a recent Scientific American report concerning the $1.3 billion allocated to the Human Brain Project, launched by the European Union in 2013.

Convinced by the charismatic Henry Markram that he could create a supercomputer simulation of the human brain by 2023, and that such a model would yield breakthroughs in treating Alzheimer's disease and other disorders, EU authorities funded the project with virtually no strings attached. Less than two years in, the project turned into a "brain wreck," and Markram was asked to step down.


We are living organisms, not computers. Deal with it. Let us keep trying to understand ourselves, but without the unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing precious few discoveries along the way. It is time to press the DELETE key.

Translation: Vlada Olshanskaya and Denis Pronin.

Imagine an experimental nanodrug that can link the minds of different people. Imagine that a group of enterprising neuroscientists and engineers discovers a new use for this drug: running an operating system directly inside the brain. People would then be able to communicate telepathically with one another via mental chat, and even manipulate other people's bodies by taking control of their brains. Although this is the plot of Ramez Naam's science fiction novel "Nexus," the technological future he describes no longer seems so distant.

IDEA IN BRIEF

The following three technology projects and audacious research ideas show that we already have one foot in a future where paralyzed patients can communicate with the outside world, where memory can be expanded with brain implants, and where a computer chip runs on living human neurons.

How to connect your brain to a tablet and help paralyzed patients communicate

For patient T6, 2014 was the happiest year of her life. That was the year she was able to operate a Nexus tablet computer using signals from her own brain, leaping straight from the 1980s era of disk operating systems (DOS) into the new age of Android OS.

T6 is a 50-year-old woman with amyotrophic lateral sclerosis, also known as Lou Gehrig's disease, which causes progressive damage to motor neurons and paralysis throughout the body. T6 is paralyzed almost completely from the neck down. Until 2014 she was entirely unable to interact with the outside world.

Paralysis can also result from spinal cord injury, stroke, or neurodegenerative diseases that take away the ability to speak, write, or communicate with others in any way.

The era of brain-machine interfaces began to blossom two decades ago with the creation of assistive devices to help such patients. The results were remarkable: eye-tracking and head-tracking made it possible to follow eye movements and use them as output to control a mouse cursor on a computer screen. Sometimes the user could even click on a link by holding their gaze on one point of the screen for a set interval (the so-called dwell time).

However, eye-tracking systems were taxing on the user's eyes and too expensive. Then came neural prosthetics, which cut out the sensory-organ intermediary and let the brain communicate directly with the computer. A microchip is implanted in the patient's brain, and the neural signals associated with desire or intention are decoded by complex algorithms in real time and used to control a cursor on a computer interface.

Two years ago, a 100-channel electrode array was implanted in the left side of patient T6's brain, in the region responsible for movement. At the same time, a Stanford laboratory was working on a prototype prosthesis that would let paralyzed people type words on a specially designed keyboard simply by thinking about them. The device worked as follows: the electrodes implanted in the brain recorded the patient's neural activity at the moment she looked at the desired letter on the screen and transmitted this information to the neuroprosthesis, which interpreted the signals and turned them into continuous cursor control and clicks on the screen.
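The "complex algorithms" mentioned above are not described here, so as a purely illustrative sketch (not BrainGate's actual decoder), here is the textbook idea of intention decoding: a linear map from a vector of firing rates to a two-dimensional cursor velocity, fitted during a calibration session. All data below are synthetic.

```python
import numpy as np

# Illustrative sketch only - not BrainGate's algorithm. A linear decoder maps
# firing rates from a 100-channel array to a 2-D cursor velocity; the matrix W
# is fitted on calibration trials where the intended movement is known.

rng = np.random.default_rng(0)
n_channels = 100

# Synthetic calibration data: firing rates and corresponding intended velocities.
rates = rng.poisson(lam=10, size=(500, n_channels)).astype(float)
true_W = rng.normal(size=(n_channels, 2))
intended_velocity = rates @ true_W + rng.normal(scale=5.0, size=(500, 2))

# Fit the decoder by least squares: velocity ≈ rates @ W.
W, *_ = np.linalg.lstsq(rates, intended_velocity, rcond=None)

def decode(firing_rates: np.ndarray) -> np.ndarray:
    """Turn one time-bin of firing rates into a cursor velocity command."""
    return firing_rates @ W

print(decode(rates[0]).round(1))   # a 2-D velocity in screen units
```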

However, this process was extremely slow. It became clear that the end result should be a device that could operate without a direct wired connection to the computer through the electrodes. The interface itself also had to look more appealing than something from the 80s. The BrainGate clinical trial team behind this research realized that their point-and-click system resembled tapping a finger on a touch screen. And since most of us use touch tablets every day, the market for them is huge: you just pick one and buy it.

Paralyzed patient T6 was able to "press" the screen of a Nexus 9 tablet. The neuroprosthesis communicated with the tablet over the Bluetooth protocol, essentially acting as a wireless mouse.

The team is now working to extend the implant's lifespan and to develop support for other motor maneuvers, such as select-and-drag and multi-touch gestures. BrainGate also plans to extend its program to other operating systems.

Computer chip made from living brain cells

A few years ago, researchers from Germany and Japan managed to simulate one second of the activity of 1 percent of the human brain. Even that was possible only thanks to the computing power of one of the world's most powerful supercomputers.

Yet the human brain remains the most powerful and most energy-efficient computer there is. What if it were possible to harness that power for future generations of machines?

As crazy as it may sound, neuroscientist Osh Agabi launched the Koniku project precisely to pursue this goal. He has built a prototype chip containing 64 neurons. Its first application was a drone that can "smell" explosives.

Bees have one of the keenest senses of smell in nature; in fact, they largely navigate by smell. Agabi has created a drone that rivals bees' ability to detect and interpret odors. It could be used not only for military purposes and bomb detection, but also for surveying farmland and oil refineries - any place where health and safety can be assessed by smell.

During development, Agabi and his team had to solve three main problems: structuring neurons the way they are structured in the brain, reading and writing information to each individual neuron, and creating a stable environment for the cells.

Induced pluripotent stem cell technology - a method in which a mature cell, such as a skin cell, is genetically reprogrammed back into a stem cell - makes it possible to grow a neuron from almost any cell. But unlike ordinary electronic components, living neurons need a special habitat.

The neurons were therefore placed in shells with a controlled environment that regulates the temperature and pH inside and supplies the cells with nutrients. Such a shell also makes it possible to control how the neurons interact with one another.

Electrodes beneath the shell allow information to be read from or written to the neurons. Agabi describes the process as follows:

"We coat the electrodes with DNA and enriched proteins, which encourages the neurons to form artificially tight connections with these conductors. We can then read information from the neurons or, conversely, send information to them in the same way, or via light or chemical signals."
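Purely as an illustration of this read/write idea, here is a sketch in which each electrode channel can be sampled (read a neuron's activity) or stimulated (write to it). There is no real Koniku API here; the Channel class and its methods are hypothetical stand-ins for whatever hardware driver such a chip would expose.

```python
# Hypothetical sketch: one electrode channel per neuron, supporting read/write.
# Not a real API; values are mock numbers standing in for recorded voltages.
import random

class Channel:
    """Hypothetical electrode channel coupled to one living neuron."""
    def __init__(self, index: int):
        self.index = index
        self.last_stimulus = 0.0

    def read(self) -> float:
        """Return a mock activity level; real hardware would return a recording."""
        return self.last_stimulus * 0.8 + random.gauss(0.0, 0.05)

    def write(self, amplitude: float) -> None:
        """Deliver a mock stimulus; real hardware would drive the electrode."""
        self.last_stimulus = amplitude

chip = [Channel(i) for i in range(64)]        # a 64-neuron prototype
chip[3].write(1.0)                            # stimulate neuron 3
print([round(ch.read(), 2) for ch in chip[:5]])
```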

Agabi believes the future of technology lies in unlocking the potential of so-called wetware - the human brain working in concert with machines.

"There are no practical limits to how big we will make our future devices or how differently we can model the brain. Biology is the only limit."

Koniku's future plans include developing chips:

  • with 500 neurons, which will control a car without a driver;
  • with 10,000 neurons - will be able to process and recognize images as the human eye does;
  • with 100,000 neurons - will create a robot with multisensory input, which will be practically indistinguishable from a human in terms of perceptual properties;
  • with a million neurons - will give us a computer that will think for itself.

Memory chip embedded in the brain

Every year, hundreds of millions of people struggle with memory loss. The causes vary: the brain injuries that plague veterans and football players, the strokes and Alzheimer's disease that strike in old age, or simply the aging of the brain that awaits us all. Dr. Theodore Berger, a biomedical engineer at the University of Southern California, is testing a memory-enhancing implant, funded by the Defense Advanced Research Projects Agency (DARPA), that mimics the signal processing that fails when neurons cannot form new long-term memories.

For the device to work, scientists must understand how memory works. The hippocampus is the area of the brain responsible for transforming short-term memories into long-term ones. How does it do this? And can its activity be simulated in a computer chip?

"Essentially, memory is a series of electrical impulses that occur over time and are generated by a particular set of neurons," explains Berger. "That is very important, because it means we can reduce the process to a mathematical equation and cast it in the framework of a computational process."

So neuroscientists began decoding the flow of information within the hippocampus. The key to this decoding was a strong electrical signal that travels from a region of the organ called CA3, the "input" of the hippocampus, to CA1, its "output" node. This signal is weakened in people with memory disorders.

"If we could recreate it with a chip, we could restore or even increase memory capacity," says Berger.

But tracing this decoding path is difficult, because neurons behave nonlinearly: any small factor involved in the process can lead to completely different results. Mathematics and programming, however, do not stand still, and together they can now model extremely complex computational structures with many unknowns and many "outputs".
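As a toy illustration of this kind of nonlinear multi-input multi-output mapping, the sketch below predicts binned CA1 spike counts from binned CA3 spike counts using a polynomial feature expansion plus ridge regression. The data is synthetic and every parameter is invented for the example; the published hippocampal-prosthesis models are far more elaborate.

```python
# Toy nonlinear MIMO model: predict CA1 activity from CA3 activity.
# Synthetic data only; a crude stand-in for the real models.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_bins, n_ca3, n_ca1 = 2000, 16, 8           # time bins, CA3 inputs, CA1 outputs
ca3 = rng.poisson(lam=2.0, size=(n_bins, n_ca3)).astype(float)

# Synthetic "ground truth": CA1 responds nonlinearly to weighted CA3 drive.
w = rng.normal(size=(n_ca3, n_ca1))
ca1 = np.tanh(ca3 @ w * 0.1) + 0.05 * rng.normal(size=(n_bins, n_ca1))

# Nonlinear expansion of the inputs followed by linear regression.
X = PolynomialFeatures(degree=2, include_bias=False).fit_transform(ca3)
X_tr, X_te, y_tr, y_te = train_test_split(X, ca1, test_size=0.25, random_state=0)

model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.2f}")
```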

To begin with, the scientists trained rats to press one lever or another to get a treat. As the rats memorized the task and turned that memory into a long-term one, the researchers carefully recorded all the accompanying neural transformations and then built a computer chip from the resulting mathematical model. Next, they injected the rats with a substance that temporarily disrupted their ability to remember and inserted the chip into the brain. The device acted on CA1, the "output" region, and the scientists found that the rats' memory of how to get the treat was restored.

The next tests were carried out on monkeys. This time the scientists focused on the prefrontal cortex, which receives and modulates memories coming from the hippocampus. The animals were shown a series of images, some of which were repeated. By recording neural activity at the moment the animals recognized a previously seen picture, the researchers created a mathematical model and a chip based on it. After that, the monkeys' prefrontal cortex was suppressed with cocaine, and the scientists were again able to restore memory.

For the human experiments, Berger selected 12 volunteers with epilepsy who already had electrodes implanted in their brains to trace the source of their seizures. Repeated seizures destroy key parts of the hippocampus needed to form long-term memories. By studying the brain activity of these patients, it may become possible to restore that ability.

Just as in the previous experiments, a human "memory code" was captured that could predict the pattern of activity in CA1 cells from data stored or arriving in CA3. Compared with "real" brain activity, the chip's predictions are about 80 percent accurate.

It is too early to speak of concrete results from the human experiments. Unlike the motor cortex, where each section is responsible for a specific part of the body, the hippocampus is organized chaotically. It is also too early to say whether such an implant could restore memory in people whose damage lies in the "output" part of the hippocampus.

Generalizing the algorithm for such a chip remains a problem, since the experimental prototype was built on individual data from specific patients. What if the memory code is different for everyone, depending on the kind of incoming data it receives? Berger reminds us that the brain is also constrained by its biophysics:

“There are only so many ways in which electrical signals in the hippocampus can be processed, which, while numerous, are nevertheless limited and finite,” says the scientist.

Anastasia Lvova

Prostheses controlled by the power of thought, direct communication with computers without the help of muscles, and, in the future, an artificial body for a paralyzed person and the training of cognitive functions - thinking, memory, and attention. None of this is science fiction anymore. The time of neurotechnology has already come, says Sergei Shishkin, candidate of biological sciences and head of the department of neurocognitive technologies at the National Research Center "Kurchatov Institute". He spoke about the latest results of brain research at the Sirius Educational Center. Lenta.ru presents the main points of his talk.

First steps in terra incognita

The results of physics research underlie everything that surrounds us. Whatever we look at - buildings, clothes, computers, smartphones - all of it is in some way connected with technologies based on the laws of physics. The contribution of brain science to our lives is incomparably smaller.

Why? Until recently, neuroscience developed very slowly. In the mid-19th century, scientists were only beginning to understand that the brain consists of nerve cells - neurons - which at the time were extremely difficult to see and isolate. Modern researchers have found ways to study neurons in much greater depth and to monitor their work - for example, by injecting fluorescent dyes that glow when a cell is activated.

New methods make it possible to observe the functioning of the human brain without surgical intervention using nuclear magnetic resonance technology. We are beginning to better understand the structure of the brain and create new technologies based on this knowledge. One of the most impressive is the brain-computer interface.

Brain-computer interface

This technology allows a person to control a computer with the power of thought; more precisely, the scientific literature defines it as "technology for transmitting commands from the brain to a computer without the help of muscles or peripheral nerves". The main purpose of brain-computer interfaces is to help disabled people, primarily those whose muscles, or the system controlling them, no longer work. The causes vary - for example, a car accident in which the spinal cord is severed.

Does a healthy person need an additional channel of communication with a computer? Some scientists believe such an interface could greatly speed up work with computers, because the user would no longer be "slowed down" by their hands: information would go directly to the machine. There is also a more realistic idea: these interfaces could be used to train the brain's cognitive functions - thinking, memory, attention. One cannot help recalling the film "The Lawnmower Man", whose main character used virtual reality to "pump up" his brain so much that he effectively became a superman.

At the heart of these desires lies the dream of expanding the brain's capabilities. That is understandable: we are almost always dissatisfied with the abilities we have. This dream suggests to scientists a seemingly fantastic but increasingly realistic line of work: connecting the brain and the computer as closely as possible. After all, computer programs have a major drawback - almost everything in them is built on strict rules - whereas a person has intuition, even if he cannot run through options almost instantly the way a computer can. Combining the strengths of the brain and the computer would therefore be very useful.

Practical problems

But first of all, neuroscience faces very practical tasks - for example, helping people with amyotrophic lateral sclerosis. Patients with this diagnosis are few, but the disease is very serious. The patient can think completely normally and perceive information from the outside world, yet is unable to move or even speak. Unfortunately, the disease remains incurable, and such patients are cut off from communication for the rest of their lives.

The first attempts to create a brain-computer interface were made back in the 1960s, but serious interest in the technology arose only after the German scientist Niels Birbaumer and his colleagues developed the so-called "thought translation device" in the late 1990s and began teaching paralyzed patients to use it.

Thanks to this device, some patients were able to communicate with relatives and researchers. One of them used the "thought translation device" to write a long letter explaining how he typed the letters. This text, which the patient composed over six months, was published in a scientific journal.

Working with Birbaumer's system is not simple. The patient must first select one of the two halves of the alphabet shown on the screen by shifting the electrical potentials generated by the brain in either a positive or a negative direction - in effect, mentally saying "yes" or "no". The potential is recorded directly from the surface of the scalp and fed into a computer, which determines which half of the alphabet has been chosen. The person then keeps narrowing the selection in the same way until a specific letter is reached. This is inconvenient and slow, but the method does not require implanting electrodes into the brain.
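The selection logic itself is essentially a binary search over the alphabet. Here is a minimal sketch of that scheme, in which a simulated "yes"/"no" response picks one half of the remaining letters each round; the decode_intent() stub is a hypothetical stand-in for the real slow-cortical-potential classifier.

```python
# Binary-split speller sketch: each round a "yes"/"no" brain response halves
# the remaining alphabet until one letter is left. The classifier is stubbed.
import string

ALPHABET = list(string.ascii_uppercase + " ")

def decode_intent(target: str, candidates: list[str]) -> bool:
    """Stand-in for the EEG classifier: 'yes' if the target letter
    lies in the currently highlighted half."""
    return target in candidates

def spell_letter(target: str) -> str:
    remaining = ALPHABET
    while len(remaining) > 1:
        half = remaining[: len(remaining) // 2]
        if decode_intent(target, half):   # mental "yes"
            remaining = half
        else:                             # mental "no"
            remaining = remaining[len(remaining) // 2 :]
    return remaining[0]

print("".join(spell_letter(ch) for ch in "HELLO WORLD"))
```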

Invasive methods, in which electrodes are inserted directly into the brain, are more successful. The impetus for this line of work came from the war in Iraq. Many servicemen were left disabled, and American scientists set out to find a way for such people to control mechanical prostheses through a brain-computer interface. The first experiments were carried out on monkeys, and then electrodes were implanted in paralyzed people. As a result, patients were able to take an active part in learning to control the prosthesis.

In 2012, Andrew Schwartz's team in Pittsburgh managed to train a paralyzed woman to control a mechanical arm so precisely that she could grasp various objects with it and even shake hands with the host of a popular television program. True, not every movement was performed flawlessly, but the system is being improved.

How was this achieved? An approach was developed that determines the desired direction of movement on the fly from the signals encoded by neurons. To do this, small electrodes are implanted in the motor cortex; they pick up signals from the neurons, which are then transmitted to a computer.
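One classic way to decode intended direction from motor-cortex activity is the population vector: each neuron is assumed to fire most strongly for a "preferred" direction, and summing preferred directions weighted by normalized firing rates estimates the intended movement. The sketch below uses synthetic cosine-tuned rates purely for illustration; it is not the specific algorithm used in the Pittsburgh work.

```python
# Population-vector decoding sketch with synthetic cosine-tuned neurons.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 64
preferred = rng.uniform(0, 2 * np.pi, n_neurons)     # preferred direction per neuron

def firing_rates(true_direction: float) -> np.ndarray:
    """Cosine-tuned rates plus noise (a standard simplified tuning model)."""
    base, depth = 10.0, 8.0
    rates = base + depth * np.cos(true_direction - preferred)
    return rates + rng.normal(0, 1.0, n_neurons)

def decode_direction(rates: np.ndarray) -> float:
    weights = rates - rates.mean()                    # normalize around baseline
    x = np.sum(weights * np.cos(preferred))
    y = np.sum(weights * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

true_dir = np.deg2rad(120)
estimate = decode_direction(firing_rates(true_dir))
print(f"true: 120 deg, decoded: {np.rad2deg(estimate):.1f} deg")
```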

The question immediately arises: if a person can move a mechanical arm, is it possible to build a mechanical double - an avatar that reproduces all of a person's movements? Such a mechanical body would be controlled through a brain-computer interface. There is a great deal of fantasy around this idea, and scientists occasionally even draw up real plans. For now, serious experts treat it as science fiction, but in the distant future it is possible.

Gaze control

The laboratory of cognitive technologies at the Kurchatov Institute is now working not only on brain-computer interfaces, but also on "eye-brain-computer" interfaces. Strictly speaking, this is not exactly a brain-computer interface, because it relies on the eye muscles to operate. Control by tracking the direction of gaze is also very important, since many people with impaired motor function still have working eye muscles. There are already off-the-shelf systems that let a person type text with their eyes.

However, problems arise beyond simple typing. For example, it is difficult to teach the interface not to issue a command when a person's gaze merely rests on a control button because they are thinking about something else.

To solve this problem, the Kurchatov Institute decided to create a combined technology. Participants in the experiment play a computer game, making moves only with short gaze pauses. Meanwhile, researchers record the electrical signals of their brains from the surface of the scalp.

It turned out that when a participant holds their gaze in order to make a move, special markers appear in the brain signals that are absent when the gaze lingers for no reason. The "eye-brain-computer" interface is being built on these observations. Its user will only need to look at a button or link on the screen and want to click it - the system will recognize that intention, and the click will happen by itself.
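A minimal sketch of the underlying idea: classify whether a gaze dwell is intentional from EEG features recorded during the dwell. The features and the "marker" component below are entirely synthetic assumptions; a real system would use epochs time-locked to fixation onset and a classifier trained on each user's data.

```python
# Intentional-vs-spontaneous dwell classification sketch on synthetic EEG features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_epochs, n_features = 400, 32               # e.g. mean amplitudes per channel/window

# Synthetic EEG features: intentional dwells carry a small added "marker".
X = rng.normal(0, 1, size=(n_epochs, n_features))
y = rng.integers(0, 2, n_epochs)             # 1 = intentional, 0 = spontaneous
X[y == 1, :8] += 0.6                         # the hypothetical marker component

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```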

In the future, new methods will appear that allow the brain to be connected to a computer without risky and very expensive operations. We are witnessing the birth of these technologies and will soon be able to try them out.