Computer dementia or a new stage in the development of the human brain? Your brain is not a computer

A real war is unfolding before our eyes between rapidly developing technology and the human brain. We are already being told that the contest between "machine and man" will not end in the latter's favor, and in the near future at that. How legitimate is the idea of replacing an "imperfect biological computer with a more perfect electronic one," and how does the human brain actually differ from a modern device? Psychologist Andrey Bespalov, a certified trainer in Edward de Bono's thinking methods, reflects.

Many people think that with the progress of technology, the need to memorize information will disappear by itself. After all, the need for mental calculation disappeared with the advent of calculators! Already now, any information can be “Googled” in a few minutes, and slogans in the style of “your brain is the most powerful computer” are losing relevance. The computer/cloud/Google can remember so much better and more than we can that there is no point in competing with them. But is our brain really a computer in our head? And why can’t even the most advanced technology compare to the work of human gray matter?

Hierarchy of memory

Let's turn to a simple example. Anyone who works on a computer knows that a file with instructions on "how to make a table of contents in Word" looks something like this: "Indicate in the document the place where the table of contents should be inserted, open the 'References' tab, click the 'Table of Contents' button" - and so on. But in my head it all works differently. Otherwise, if a friend asked me on the phone how to make an automatic table of contents, I would answer right away. Instead I say, "Wait, I'll open the program now," and only once I see Word in front of me can I remember what to do.

The whole mystery lies in the fact that, unlike files, which are written and read linearly, memories in the brain are stored hierarchically. What happens when a person sees, say, the letter H? The image falls on the retina and travels to the primary visual cortex, which is responsible for recognizing simple features: two vertical strokes and one horizontal. It passes the data about these strokes to the secondary visual cortex, which assembles them into a more complex pattern ("H") and hands the result to the next area, where letters arriving from different parts of the secondary visual cortex are combined into words and passed further "up."
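To make the hand-off concrete, here is a deliberately crude sketch in Python; the stroke names and lookup tables are invented for the illustration and are not meant as a model of real cortex. The point is only that each stage works exclusively with the output of the stage below it.

```python
# Toy hierarchy: strokes -> letters -> word. Invented feature names, not neuroscience.
LETTER_PATTERNS = {
    ("horizontal", "vertical", "vertical"): "H",
    ("vertical",): "I",
}

def primary_stage(image):
    """Pretend "edge detectors": here an image is already given as a list of strokes."""
    return tuple(sorted(image))

def secondary_stage(strokes):
    """Assemble simple strokes into a letter."""
    return LETTER_PATTERNS.get(strokes, "?")

def word_stage(letters):
    """Combine letters arriving from different regions into a word."""
    return "".join(letters)

images = [["vertical", "horizontal", "vertical"], ["vertical"]]
print(word_stage(secondary_stage(primary_stage(img)) for img in images))  # HI
```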

The power of prediction

The cerebral cortex is divided into many zones through which information constantly moves, not only up the hierarchy but also down. The human brain is so efficient, says Jeff Hawkins in his book On Intelligence, because it can predict future events from experience stored in memory. To produce a specific action (catching a ball, for example), the brain does not need to run lengthy calculations - it only has to remember how it acted before, and on that basis predict the ball's flight and coordinate its movements. The circuits of neurons in the cortex form a hierarchical structure in which higher levels constantly send information to lower ones. This lets the incoming sequence of images be compared with sequences from previous experience. So, given the words "A long time ago, many years..." we can predict that the next word will be "...ago."
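This "predict by remembering" idea can be sketched in a few lines of Python. The lookup table over word sequences below is a toy invented purely for illustration, not a claim about cortical circuitry.

```python
from collections import defaultdict

class SequenceMemory:
    """Toy prediction-from-stored-experience: remember what followed each
    observed prefix, then predict by recalling the longest matching prefix."""
    def __init__(self, max_context=3):
        self.max_context = max_context
        self.followers = defaultdict(list)

    def learn(self, words):
        for i in range(len(words) - 1):
            for k in range(1, self.max_context + 1):
                if i - k + 1 < 0:
                    break
                self.followers[tuple(words[i - k + 1:i + 1])].append(words[i + 1])

    def predict(self, words):
        for k in range(min(self.max_context, len(words)), 0, -1):
            options = self.followers.get(tuple(words[-k:]))
            if options:
                return max(set(options), key=options.count)  # most frequent follower
        return None

memory = SequenceMemory()
memory.learn("a long time ago many years ago in a galaxy far far away".split())
print(memory.predict("many years".split()))  # -> 'ago', recalled rather than computed
```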

Hawkins compares our brains to the hierarchy of military command: "The generals at the top of the army say, 'Move the troops to Florida for the winter.'" A simple high-level command expands into more detailed commands, going down the levels of the hierarchy. And thousands of individual structures perform tens of thousands of actions, resulting in the movement of troops. Reports of what is happening are generated at each level and flow up until the general receives the final report: “The movement was successful.” The general does not go into details.

Remember all

Unlike the brain, a computer has two very different devices responsible for "memory": the hard drive and the RAM. The analogy seems obvious: the hard drive is the cortex, the RAM is the hippocampus. But let's look more closely at how the system works. New information first enters the hippocampus through the cortical areas. If we never encounter it again, the hippocampus gradually forgets it. The more often we recall something, the stronger the connections in the cortex become, until the hippocampus "hands over all authority" for that pattern to the cortex. This process is called "memory consolidation" and can take up to two years. Until it is complete, it is too early to say that the information is reliably stored in long-term memory.
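As a cartoon of that hand-over, consider the sketch below. All the numbers are invented for the illustration; the only point is the division of labor: the hippocampal trace fades unless refreshed, while repeated recall gradually strengthens the cortical trace.

```python
# Toy model of consolidation: invented constants, illustrative only.
def two_years_later(rehearse, days=730, rehearse_every=30):
    hippocampus, cortex = 1.0, 0.0
    for day in range(days):
        hippocampus *= 0.99                       # slow forgetting without rehearsal
        if rehearse and day % rehearse_every == 0:
            hippocampus = 1.0                     # refreshed by the re-encounter...
            cortex = min(1.0, cortex + 0.05)      # ...while cortical links strengthen
    return {"recallable": hippocampus > 0.5 or cortex > 0.5,
            "consolidated": cortex > 0.5}         # survives without the hippocampus?

print(two_years_later(rehearse=True))   # {'recallable': True, 'consolidated': True}
print(two_years_later(rehearse=False))  # {'recallable': False, 'consolidated': False}
```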

On a dreary autumn day, try to remember your vacation: how you lay on the beach and looked at the sand. Look closer: you can already make out individual grains of sand, pebbles and shell fragments. It is very doubtful that you actually remember this - at some point imagination wedges itself into the picture and helpfully supplies the missing details. But at exactly what point memory and fantasy merge into a single whole is impossible to determine.

Thus, any information that returns from long-term memory into working memory is brought into line with the changed context and current tasks, and is then consolidated again in updated form. And every time we recall an event of the past, it is no longer a memory of the event itself but of the brain's most recent "edit" of it. Our memory simply has no "open file read-only" option - any access to it implies some change.

Memory as art

On a computer, deleting and saving a file are opposites; for human memory they are two sides of the same coin.
"For our intellect, forgetting is as important a function as remembering," wrote William James more than a hundred years ago. "If we remembered absolutely everything, we would be as badly off as if we remembered nothing. Recalling an event would take as long as the event itself."

Yes, a computer may be better at retaining information, but it cannot forget the way we do. And we do not forget by accident: memory is cleared of husk (which, as in the example with the sand, imagination can fill in when needed), and only the meaningful frame is preserved. Reflection helps us identify and define that frame.
This is why William James says that "the art of remembering is the art of thinking." To remember means to connect new information with what we already know. The more a person remembers, the easier it is for something new to stay in memory. And the best way to remember something is to keep thinking about the information you have received.

How not to drown in a sea of ​​facts

What conclusions arise? We can only rejoice at the capabilities of our own brain. Our memory, unlike computer memory, is not just a storehouse of information, but an integral part of thinking. And this is a tremendous opportunity for development.

To replenish your knowledge, you can ask Google for any information, but to do that you need to understand what exactly you don't know. It's like a jigsaw puzzle: when the picture around the missing piece is already assembled, it is very easy to see what needs to be found; but when all the pieces are in disarray, it is not even clear where to start. In that case Google can only drown us in a sea of facts, not bring us any closer to understanding them. Only the brain tells us which fragments are missing. So all that remains is to load ourselves regularly with new, interesting tasks to keep the brain in great shape.

Despite their best efforts, neuroscientists and cognitive psychologists will never find in the brain a copy of Beethoven's Fifth Symphony, or copies of words, pictures, grammar rules or any other external signals. Of course, the human brain is not completely empty. But it doesn't contain most of the things people think it contains - even simple things like "memories".

Our misconceptions about the brain have deep historical roots, but the invention of computers in the 1940s confused us most of all. For half a century, psychologists, linguists, neuroscientists and other experts on human behavior have argued that the human brain works like a computer.

To get an idea of how frivolous this idea is, consider the brains of babies. A healthy newborn has more than ten reflexes. He turns his head toward whatever touches his cheek and sucks on anything that enters his mouth. He holds his breath when immersed in water. He grips things in his hands so tightly that he can almost support his own weight. But perhaps most importantly, newborns have powerful learning mechanisms that allow them to change quickly so they can interact more effectively with the world around them.

Feelings, reflexes and learning mechanisms are what we have from the very beginning, and when you think about it, that's quite a lot. If we lacked any of these abilities, we would probably have difficulty surviving.

But here's what we don't have from birth: information, data, rules, knowledge, vocabulary, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols and buffers - the elements that allow digital computers to behave somewhat rationally. Not only are these things not in us from birth, they do not develop in us during life.

We don't keep words or rules that tell us how to use them. We do not create images of visual impulses, store them in a short-term memory buffer, and then transfer the images to a long-term memory device. We do not recall information, images or words from the memory register. All this is done by computers, but not by living beings.

Computers literally process information - numbers, words, formulas, images. The information must first be translated into a format that a computer can recognize, that is, into sets of ones and zeros (“bits”) collected into small blocks (“bytes”).

Computers move these sets from place to place between different areas of physical memory, implemented as electronic components. Sometimes they copy the sets, and sometimes they transform them in various ways - say, when you correct errors in a manuscript or retouch a photograph. The rules that a computer follows when moving, copying or working with an array of information are also stored inside the computer. A set of rules is called a "program" or "algorithm". A set of algorithms working together that we use for different purposes (for example, buying stocks or dating online) is called an "application".
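For readers who want to see what "literally processing information" looks like, here is the computer side of the story in a few lines of Python. It illustrates only what computers do; nothing analogous is being claimed for the brain.

```python
# Text becomes bytes, bytes get copied and transformed by explicit rules.
text = "HOME"
raw = text.encode("ascii")                  # the symbols as bytes
print(list(raw))                            # [72, 79, 77, 69]
print([format(b, "08b") for b in raw])      # the same bytes as ones and zeros

copy = bytearray(raw)                       # an exact copy in another memory location
copy[0:1] = b"D"                            # a rule-driven transformation ("edit")
print(copy.decode("ascii"))                 # DOME
```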

These are well-known facts, but they need to be spelled out to be clear: computers really do operate on symbolic representations of the world. They really do store and retrieve. They really do process. They really do have physical memory. They really are driven by algorithms in everything they do.

However, people don’t do anything like that. So why do so many scientists talk about our mental activity as if we were computers?

In 2015, artificial intelligence expert George Zarkadakis released the book In Our Own Image, in which he describes six different concepts that people have used over the past two thousand years to describe the workings of human intelligence.

In the earliest conception, preserved in the Bible, humans were created from clay or mud, which an intelligent God then imbued with his spirit. That spirit "explains" our mind - at least from a grammatical point of view.

The invention of hydraulics in the 3rd century BC led to the popularity of the hydraulic concept of human consciousness. The idea was that the flow of various fluids in the body - "bodily fluids" - accounted for both physical and spiritual functions. The hydraulic concept persisted for more than 1,600 years, all the while hampering the development of medicine.

By the 16th century, devices powered by springs and gears had appeared, which inspired René Descartes to argue that man is a complex machine. In the 17th century, British philosopher Thomas Hobbes proposed that thinking occurs through small mechanical movements in the brain. By the beginning of the 18th century, discoveries in the field of electricity and chemistry led to the emergence of a new theory of human thinking, again of a more metaphorical nature. In the mid-19th century, German physicist Hermann von Helmholtz, inspired by recent advances in communications, compared the brain to a telegraph.

Albrecht von Haller. Icones anatomicae

Mathematician John von Neumann stated that the function of the human nervous system is "digital in the absence of evidence to the contrary," drawing parallels between the components of the computing machines of the time and areas of the human brain.

Each concept reflects the most advanced ideas of the era that gave birth to it. As one might expect, just a few years after the birth of computer technology in the 1940s, people began to argue that the brain works like a computer: the brain itself playing the role of hardware, and our thoughts acting as software.

This view reached its zenith in the 1958 book The Computer and the Brain, in which mathematician John von Neumann stated emphatically that the function of the human nervous system is “digital in the absence of evidence to the contrary.” Although he acknowledged that very little is known about the role of the brain in the functioning of intelligence and memory, the scientist drew parallels between the components of computer machines of that time and areas of the human brain.


Thanks to subsequent advances in computer technology and brain research, an ambitious interdisciplinary study of human consciousness gradually developed, based on the idea that people, like computers, are information processors. This work now spans thousands of studies, receives billions of dollars in funding, and has been the subject of numerous papers. Ray Kurzweil's 2012 book How to Create a Mind: The Secret of Human Thought Revealed illustrates the point, describing the brain's "algorithms", its "information processing" techniques, and even how its structure superficially resembles integrated circuits.

The idea of human thought as information processing (IP) currently dominates human consciousness, among ordinary people and scientists alike. But in the end this is just another metaphor, a fiction that we pass off as reality to explain something we don't really understand.

The flawed logic of the IP concept is easy enough to state. It rests on a fallacious syllogism with two reasonable premises and a wrong conclusion. Reasonable premise #1: all computers are capable of intelligent behavior. Reasonable premise #2: all computers are information processors. Faulty conclusion: all objects capable of behaving intelligently are information processors.

Formalities aside, the idea that people must be information processors just because computers are is complete nonsense, and when the IP concept is finally abandoned, historians will probably view it the same way we now view the hydraulic and mechanical concepts - as nonsense.

Carry out an experiment: draw a hundred-ruble bill from memory, and then take it out of your wallet and copy it. Do you see the difference?

A drawing made in the absence of an original will certainly turn out to be terrible in comparison with a drawing made from life. Although, in fact, you have seen this bill more than one thousand times.

What is the problem? Shouldn't the "image" of the banknote be "stored" in the "storage register" of our brain? Why can't we just "refer" to this "image" and depict it on paper?

Obviously not, and thousands of years of research will not allow us to determine the location of the image of this bill in the human brain simply because it is not there.


The idea, promoted by some scientists, that individual memories are somehow stored in special neurons is absurd. Among other things, this theory takes the question of the structure of memory to an even more intractable level: how and where is memory stored in cells?

The very idea that memories are stored in individual neurons is absurd: how and where in a cell can information be stored?

We will never have to worry about the human mind running amok in cyberspace, and we will never be able to achieve immortality by downloading our soul to another medium.

One of the predictions, which was expressed in one form or another by futurist Ray Kurzweil, physicist Stephen Hawking and many others, is that if human consciousness is like a program, then technologies should soon appear that will allow it to be loaded onto a computer, thereby greatly enhancing intellectual abilities and making immortality possible. This idea formed the basis of the plot of the dystopian film Transcendence (2014), in which Johnny Depp played a scientist similar to Kurzweil. He uploaded his mind to the Internet, causing devastating consequences for humanity.

Still from the film Transcendence

Fortunately, the IP concept has nothing even close to reality behind it, so we don't have to worry about the human mind running amok in cyberspace, and, sadly, we will never be able to achieve immortality by downloading our souls to another medium. It is not just that the brain lacks software; the problem is deeper - call it the problem of uniqueness - and it is both fascinating and depressing.

Since our brain has neither "memory devices" nor "images" of external stimuli, and since over the course of a life the brain changes under the influence of external conditions, there is no reason to believe that any two people in the world react to the same influence in the same way. If you and I attend the same concert, the changes that happen in your brain after listening will differ from the changes that happen in mine. These changes depend on the unique structure of nerve cells, shaped over the whole of one's previous life.

This is why, as Frederic Bartlett wrote in his 1932 book Remembering, two people hearing the same story will not be able to retell it in exactly the same way, and over time their versions of the story will become less and less similar to each other.

"Superiority"

I think this is very inspiring, because it means that each of us is truly unique, not only in our genes but also in the way our brains change over time. But it is also disheartening, because it makes the neuroscientist's already difficult task nearly hopeless. Each change can affect thousands or millions of neurons, or the entire brain, and the nature of these changes is also unique in each case.

Worse, even if we could record the state of each of the brain's 86 billion neurons and simulate it all on a computer, this enormous model would be useless outside the body to which the brain belongs. This is perhaps the most annoying misconception about how humans are built, and we owe it to the erroneous IP concept.

Computers store exact copies of data. They can remain unchanged for a long time even when the power is turned off, while the brain supports our intelligence only as long as it remains alive. There is no switch: either the brain keeps working, or we do not exist. Moreover, as neuroscientist Steven Rose noted in 2005's The Future of the Brain, a copy of the brain's current state may be useless without knowing the full biography of its owner, even including the social context in which the person grew up.

Meanwhile, huge amounts of money are spent on brain research based on false ideas and promises that will not be fulfilled. Thus, the European Union launched a project to study the human brain worth $1.3 billion. European authorities believed the tempting promises of Henry Markram to create a working simulator of brain function based on a supercomputer by 2023, which would radically change the approach to the treatment of Alzheimer's disease and other ailments, and provided the project with almost unlimited funding. Less than two years after the project launched, it turned out to be a failure, and Markram was asked to resign.

People are living organisms, not computers. Accept it. We need to keep doing the hard work of understanding ourselves without weighing it down with unnecessary intellectual baggage. In the half-century of its existence, the IP concept has given us only a few useful discoveries. It's time to press the Delete key.


We all remember painful arithmetic exercises from school. It takes at least a minute to multiply numbers like 3,752 and 6,901 using pencil and paper. Of course, today, with our phones at our fingertips, we can quickly check that the result of our exercise should be 25,892,552. Modern phone processors can perform more than 100 billion of these operations per second. Moreover, these chips consume only a few watts, making them much more efficient than our slow brains, which consume 20 watts and take much longer to achieve the same result.
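For the curious, the pencil-and-paper procedure is itself a small algorithm; the Python sketch below (written for this article) spells it out and checks the result quoted above.

```python
# The schoolbook method as an algorithm: one partial product per digit of the
# second factor, each shifted one more place to the left, then a single sum.
def long_multiply(a, b):
    partial_products = [a * digit * 10 ** place
                        for place, digit in enumerate(int(d) for d in reversed(str(b)))]
    return partial_products, sum(partial_products)

rows, total = long_multiply(3752, 6901)
print(rows)   # [3752, 0, 3376800, 22512000]
print(total)  # 25892552 - the answer a phone verifies in a fraction of a microsecond
```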

Of course, the brain did not evolve to do arithmetic, which is why it does it poorly. But it does an excellent job of processing the constant flow of information coming from our environment, and it reacts to it - sometimes faster than we can become aware of it. And no matter how much energy a regular computer consumes, it will struggle with things that are easy for the brain, such as understanding language or running up a flight of stairs.

If we could create machines whose computing abilities and energy efficiency were comparable to the brain, then everything would change dramatically. Robots would move smartly in the physical world and communicate with us in natural language. Large-scale systems would collect vast amounts of information about business, science, medicine, or government, discovering new patterns, finding cause-and-effect relationships, and making predictions. Smart mobile apps like Siri and Cortana could rely less on the cloud. Such technology could allow us to create low-power devices that augment our senses, provide us with drugs, and emulate nerve signals to compensate for organ damage or paralysis.

But is it too early to set ourselves such bold goals? Is our understanding of the brain too limited for us to create technologies based on its principles? I believe that emulating even the simplest features of neural circuits can dramatically improve the performance of many commercial applications. How accurately computers must copy the biological details of the brain in order to approach its level of performance is still an open question. But today's brain-inspired, or neuromorphic, systems will be important tools in the search for an answer.

A key feature of conventional computers is the physical separation of memory, which stores data and instructions, and the logic that processes this information. There is no such division in the brain. Computing and data storage occur simultaneously and locally, across a vast network of approximately 100 billion nerve cells (neurons) and more than 100 trillion connections (synapses). Much of the brain is defined by these connections and how each neuron responds to the input of other neurons.

When we talk about the exceptional capabilities of the human brain, we usually mean a recent acquisition of a long evolutionary process: the neocortex (new cortex). This thin and extremely folded layer forms the outer shell of the brain and performs very different tasks, including processing information from the senses, motor control, memory and learning. Such a wide range of capabilities comes from a fairly homogeneous structure: six horizontal layers and a million vertical columns, about 500 microns wide, made up of neurons that integrate and distribute information encoded in electrical impulses along the appendages that grow from them - dendrites and axons.

Like all cells in the human body, a neuron has an electrical potential of about 70 mV between the outer surface and the inside. This membrane voltage changes when the neuron receives signals from other neurons connected to it. If the membrane voltage rises to a critical value, it forms a pulse, or voltage surge, lasting a few milliseconds, of the order of 40 mV. This impulse travels along the neuron's axon until it reaches the synapse, a complex biochemical structure that connects the axon of one neuron to the dendrite of another. If the impulse satisfies certain restrictions, the synapse converts it into another impulse that travels down the branching dendrites of the neuron receiving the signal and changes its membrane voltage either positively or negatively.
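A leaky integrate-and-fire toy model captures this behaviour in a dozen lines of Python. The constants are textbook round numbers chosen for illustration, not a fit to real cortical neurons.

```python
# Minimal leaky integrate-and-fire sketch: the membrane voltage leaks toward
# rest, inputs push it around, and crossing the threshold produces an impulse.
def simulate_neuron(inputs, dt=0.1, v_rest=-70.0, threshold=-55.0, tau=10.0):
    """inputs: net synaptic drive at each time step (arbitrary units)."""
    v = v_rest
    spike_times = []
    for step, drive in enumerate(inputs):
        v += dt * (-(v - v_rest) / tau + drive)
        if v >= threshold:            # critical value reached: emit an impulse
            spike_times.append(step)  # (the ~40 mV spike itself is not modelled)
            v = v_rest                # ...and reset the membrane
    return spike_times

print(simulate_neuron([2.0] * 500))   # steady drive -> a regular train of impulses
```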

Connectivity is a critical feature of the brain. A pyramidal neuron, a particularly important cell type in the human neocortex, has about 30,000 synapses, that is, 30,000 input channels from other neurons. And the brain constantly adapts: the properties of neurons and synapses - and even the network structure itself - change all the time, driven largely by sensory input and feedback from the environment.

Modern general-purpose computers are digital, not analog; the brain is not so easy to classify. Neurons accumulate electric charge, like capacitors in electronic circuits - clearly an analog process. But the brain uses spikes as its units of information, and that is fundamentally a binary scheme: at any moment, in any place, there either is a spike or there is not. In electronics terms, the brain is a mixed-signal system, with local analog computation and information transfer via binary spikes. Since a spike has only the values 0 or 1, it can travel a long distance without losing this basic information. It is also regenerated on reaching the next neuron in the network.

Another key difference between the brain and a computer: the brain handles information processing without a central clock to synchronize its work. Although we observe synchronizing events - brain waves - they organize themselves, arising from the activity of the neural networks. Interestingly, modern computer systems are beginning to adopt the brain's asynchrony to speed up computation by performing it in parallel. But the degree and purpose of parallelization in the two systems are radically different.

The idea of using the brain as a model for computing has deep roots. The first attempts were based on a simple threshold neuron, which outputs one value if the weighted sum of its inputs exceeds a threshold and another if it does not. The biological realism of this approach, conceived by Warren McCulloch and Walter Pitts in the 1940s, is quite limited. Yet it was the first step toward treating the firing neuron as a computational element.
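The McCulloch-Pitts unit is simple enough to write down directly; the sketch below is a faithful toy version, with the weights and thresholds picked by hand.

```python
# A threshold unit: one output value if the weighted sum crosses the threshold,
# another if it does not.
def threshold_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With the right weights a single unit already computes simple logic:
print(threshold_neuron([1, 1], [1, 1], threshold=2))  # AND -> 1
print(threshold_neuron([0, 1], [1, 1], threshold=1))  # OR  -> 1
```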

In 1957, Frank Rosenblatt proposed another variant of the threshold neuron, the perceptron. A network of interconnected nodes (artificial neurons) is arranged in layers. The visible layers on the surface of the network interact with the outside world as inputs and outputs, while the hidden layers inside perform all the computation.

Rosenblatt also suggested tapping into a core feature of the brain: inhibition. Instead of simply adding up all their inputs, neurons in a perceptron can also make negative contributions. This feature allows a neural network with a single hidden layer to solve the XOR problem, in which the output is true only if exactly one of the two binary inputs is true. This simple example shows that adding biological realism can add new computational capabilities. But which features of the brain are essential to how it works, and which are useless traces of evolution? No one knows.
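Here is one hand-wired way to see the point (the weights are picked by hand for the illustration): with a small hidden layer of threshold units and one inhibitory (negative) weight, XOR becomes computable.

```python
# XOR from threshold units: an OR unit, an AND unit, and an output unit that
# lets the AND unit inhibit the OR unit via a negative weight.
def unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor(a, b):
    h_or = unit([a, b], [1, 1], threshold=1)          # fires if a OR b
    h_and = unit([a, b], [1, 1], threshold=2)         # fires if a AND b
    return unit([h_or, h_and], [1, -1], threshold=1)  # OR minus an inhibitory AND

print([xor(a, b) for a in (0, 1) for b in (0, 1)])    # [0, 1, 1, 0]
```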

We know that impressive computational results can be achieved without attempting biological realism. Deep-learning researchers have come a long way in using computers to analyze large amounts of data and extract specific features from complex images. Although the neural networks they build have more inputs and hidden layers than ever before, they are still based on extremely simple neuron models. Their impressive capabilities reflect not biological realism but the sheer scale of the networks and the power of the computers used to train them. And deep-learning networks are still a long way from the computational speed, energy efficiency and learning ability of a biological brain.

Large-scale brain simulations highlight the huge gap between the brain and modern computers better than anything else. Several such attempts have been made in recent years, but all of them were severely limited by two factors: energy and simulation time. Consider, for example, a simulation run by Markus Diesmann and his colleagues several years ago using 83,000 processors of the K supercomputer in Japan. Simulating 1.73 billion neurons consumed 10 billion times more energy than an equivalent piece of brain, even though extremely simplified models were used and no learning was performed. And such simulations typically ran more than 1,000 times slower than the real time of a biological brain.

Why are they so slow? Simulating the brain on ordinary computers requires solving billions of coupled differential equations describing the dynamics of cells and networks: analog processes such as the movement of charge across a cell membrane. Computers that use Boolean logic - trading energy for precision - and that separate memory from computation are extremely inefficient at simulating the brain.

These simulations can become tools for understanding the brain, transferring data obtained in the laboratory into simulations that we can experiment with and then compare the results with observations. But if we hope to go in a different direction and use the lessons of neuroscience to create new computing systems, we need to rethink how we design and build computers.


Neurons in silicon.

Copying the brain in electronics may be more feasible than it seems at first glance. It turns out that approximately 10 fJ (10⁻¹⁵ joules) is spent creating the electrical potential at a synapse. The gate of a metal-oxide-semiconductor (MOS) transistor - even one much larger and more power-hungry than those used in CPUs - requires only 0.5 fJ to charge. A synaptic transmission is therefore equivalent to charging about 20 transistors. Moreover, at the device level, biological and electronic circuits are not that different. In principle, it is possible to build synapse- and neuron-like structures from transistors and connect them together to create an artificial brain that does not consume such outrageous amounts of energy.

The idea of building computers from transistors that work like neurons emerged in the 1980s with Professor Carver Mead of Caltech. One of Mead's key arguments in favor of "neuromorphic" computers was that semiconductor devices, operated in a particular regime, can follow the same physical laws as neurons, and that this analog behavior can be used for computation with great energy efficiency.

Mead's group also invented a neural communication scheme in which spikes are encoded only by their network addresses and the times at which they occur. This work was groundbreaking because it was the first to make time a necessary feature of artificial neural networks. Time is a key factor for the brain: signals need time to propagate, membranes need time to react, and it is time that determines the shape of postsynaptic potentials.

Several research groups active today, such as those of Giacomo Indiveri at ETH Zurich and Kwabena Boahen at Stanford, have followed in Mead's footsteps and successfully incorporated elements of biological cortical networks. The trick is to operate the transistors at low voltages, below their threshold, creating analog circuits that copy the behavior of the nervous system while consuming very little energy.

Further research in this direction may find application in systems such as brain-computer interfaces. But between these systems and the actual scale, connectivity and learning capacity of an animal brain there remains a huge gap.

So around 2005, three groups of researchers independently began to develop neuromorphic systems that differed significantly from Mead's original approach. They wanted to create large-scale systems with millions of neurons.

The closest project to conventional computers is SpiNNaker, led by Steve Furber of the University of Manchester. This group developed its own digital chip consisting of 18 ARM processors, operating at 200 MHz - about one tenth the speed of modern CPUs. Although ARM cores come from the world of classical computers, they simulate bursts sent through special routers designed to transmit information asynchronously - just like the brain. The current implementation, part of the EU's Human Brain Project and completed in 2016, contains 500,000 ARM cores. Depending on the complexity of the neuron model, each core is capable of simulating up to 1000 neurons.

The TrueNorth chip, developed by Dharmendra Modha and his colleagues at the IBM Almaden Research Center, eschews microprocessors as computing units and is in fact a neuromorphic system in which computation and memory are intertwined. TrueNorth is still a digital system, but it is based on specially designed neural circuits that implement a specific neuron model. The chip contains 5.4 billion transistors and is built with Samsung's 28 nm CMOS (complementary metal-oxide-semiconductor) technology. The transistors emulate 1 million neural circuits and 256 million simple (one-bit) synapses on a single chip.

I would say that the next project, BrainScaleS, has moved furthest from conventional computers and closest to the biological brain. My colleagues and I at Heidelberg University developed it for the European Human Brain Project. BrainScaleS implements mixed-signal processing: it combines neurons and synapses built from silicon transistors operating as analog devices with digital information exchange. The full-size system consists of 8-inch silicon wafers and can emulate 4 million neurons and 1 billion synapses.

The system can reproduce nine different firing modes of biological neurons and was developed in close collaboration with neuroscientists. Unlike Mead's analog approach, BrainScaleS runs in accelerated mode, emulating 10,000 times faster than real time. This is especially useful for studying learning and development.

Learning is likely to be a critical component of neuromorphic systems. Today, chips made in the image of the brain, like neural networks running on ordinary computers, are trained off-line using more powerful machines. But if we want to use neuromorphic systems in real applications - say, in robots that will have to work side by side with us - they will need to learn and adapt on the fly.

In the second generation of our BrainScaleS system, we implemented learning capabilities by building on-chip "plasticity processors". They are used to change a wide range of neuron and synapse parameters. This ability lets us fine-tune parameters to compensate for differences in size and electrical properties from one device to another, much as the brain itself adjusts to change.

The three large-scale systems I have described complement one another: SpiNNaker can be flexibly configured and used to test different neural models, TrueNorth has a high integration density, and BrainScaleS is designed for continuous learning and development. The search for the right way to evaluate the effectiveness of such systems is still ongoing, but early results are promising. IBM's TrueNorth group recently estimated that a synaptic transmission in their system consumes 26 pJ. While this is 1,000 times the energy required in a biological system, it is almost 100,000 times less than the energy consumed by simulations on general-purpose computers.

We are still in the early stages of understanding what such systems can do and how to apply them to real-world problems. At the same time, we must find ways to combine many neuromorphic chips into large networks with improved learning capabilities while reducing energy consumption. One problem is connectivity: the brain is three-dimensional, but our circuits are two-dimensional. The issue of three-dimensional circuit integration is now being actively studied, and such technologies can help us.

Another source of help could be devices not based on CMOS: memristors or PCRAM (phase-change memory). Today, the weights that determine how artificial synapses respond to incoming signals are stored in conventional digital memory, which takes up most of the silicon resources needed to build the network. Other types of memory could help us shrink these cells from micrometer to nanometer scale. The main difficulty for such systems will be handling the variation between individual devices; the calibration principles developed for BrainScaleS could help here.

We have just begun our journey towards practical and useful neuromorphic systems. But the effort is worth it. If successful, we will not only create powerful computing systems; we may even gain new information about the workings of our own brains.

Our brains do not process information, retrieve knowledge, or store memories. Psychologist Robert Epstein, author of 15 books and former editor-in-chief of Psychology Today, is convinced of this. For many years he has been a vocal opponent of the view of the brain as a data-processing machine. The Futurist publishes an extensive article by Epstein that may completely change your understanding of the brain.

Copy paste

It is impossible to find a copy of Beethoven's Fifth Symphony in our brains - or copies of words, pictures, grammar rules or any other stimuli from the environment. The human brain, of course, cannot be called "empty". However, it does not contain most of what everyone assumes it should contain. It doesn't even have something as simple as "memories".


Our misconceptions about the brain have deep historical roots, but the greatest harm was done by the invention of computers in the 1940s. For more than half a century, psychologists, linguists and neurophysiologists have taken it for granted that the human brain works like a computer.

To see how ridiculous this idea is, think about babies' brains. Thanks to evolution, newborns of Homo sapiens, like the newborns of all other mammal species, enter the world prepared - ready to interact effectively with it. A baby's vision is blurry, but it easily picks out faces and quite quickly learns to recognize its mother's. Its hearing prefers voices, separating them from other sounds, and can distinguish one manner of speech from another. Without a doubt, we come equipped to build social connections.

A healthy baby also has at least a dozen reflexes - ready-made reactions to stimuli that are important for survival. The newborn turns its head toward whatever touches its cheek and sucks on anything that enters its mouth. It grips things placed in its open palm so tightly that it can nearly support its own weight. Perhaps most importantly, children are born with powerful learning mechanisms that allow them to change quickly, so that they can interact with the world ever more effectively, even if that world is not like the one that greeted their distant ancestors.

Senses, reflexes and learning mechanisms are what we start with, and that is a lot. If any of these were missing, we would struggle to survive.

And here is what we are not born with: information, data, rules, a lexicon, algorithms, programs, subroutines, models, memory, images, processors, encoders, decoders, symbols and buffers - all the elements that allow digital computers to show more or less intelligent behavior. Not only are we not born with such a set of components, we never develop them - ever.

We don't store words or the rules that dictate how to manipulate them. We do not create representations of visual stimuli, store them in a short-term memory buffer, and then transfer them to long-term memory. We do not retrieve information, pictures or words from a memory register. Computers do all of these things; organisms do not.

Computers literally process information - numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can handle, that is, strings of ones and zeros (bits) gathered into small groups (bytes). One sequence of these bits encodes the letter H, another the letter O, others M and E; placed side by side, they form the word HOME. A picture - a photograph of a cat, say - is represented by a far more complex sequence of millions of bytes, surrounded by special markers that tell the computer that this location stores an image and not a word.

Of course, this is a very abstract introduction to computer theory, but it allows us to draw a simple conclusion: computers really do work with symbolic representations of the world. They really do store and retrieve. They really do process. They really do have physical memory. Everything they do, without exception, is guided by algorithms.

But people don't do any of this - they never have and never will. Given that, let us ask: why do so many scientists talk about our mental life as if we were computers?

Four fluids that controlled the human body according to the ideas of the ancient Greeks and medieval astrologers (from here came the four types of temperament)

Metaphors of consciousness

In his book In Our Own Image (2015), artificial intelligence researcher George Zarkadakis describes six different metaphors that people have used over the last 2,000 years to explain human consciousness.

In the earliest metaphor, traces of which we find in the Bible, people are created from clay into which an intelligent deity breathes spirit. That spirit "explains" our intelligence.

The discovery of the laws of hydraulics and the appearance of the first hydraulic structures in the 3rd century BC led to the growing popularity of a hydraulic model of consciousness. Philosophers decided that both bodily and mental life are governed by the various fluids in our body - the "humors". The hydraulic metaphor persisted for 1,600 years, greatly slowing the progress of medical knowledge.

In the 1500s, automatic machines powered by springs and gears appeared. As a result, leading thinkers such as Rene Descartes declared that people are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that our thoughts are the result of the mechanical work of small elements in the brain. By the 1700s, discoveries in electricity and chemistry had led to new theories of human consciousness - again, mostly metaphorical. In the 1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.

Each metaphor reflected the most advanced ideas of its era. It is not surprising that, just a few years after the birth of computer technology in the 1940s, many declared that the brain works like a computer, with neurons playing the role of "hardware" and thoughts the role of software. A key event in the development of what is now called "cognitive science" was the publication of Language and Communication in 1951, in which psychologist George Miller proposed studying thinking with concepts from information theory, cybernetics and linguistics.

This kind of theorizing reached its highest expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated bluntly that the function of the human nervous system is digital. While acknowledging that little was known about the mechanisms of thinking and memory, the scientist nevertheless drew many parallels between the components of the computers of that time and elements of the human brain.

Futurist Ray Kurzweil

Advances in computer technology and brain research led to the emergence of a powerful interdisciplinary field whose goal was to understand the human mind. Its approach rests on the belief that people, like computers, process information: both, literally, are "processors". The field now employs thousands of researchers, consumes billions of dollars in grants and produces vast numbers of technical manuals, popular articles and books. An example of this approach is Ray Kurzweil's book How to Create a Mind: The Secret of Human Thought Revealed, in which the futurist writes about "algorithms" in the brain, about how the brain "processes data", and even about the superficial resemblance between neural and electronic networks.

Sticky metaphor

Metaphor " information processing"(IO) today dominates our understanding of the functioning of consciousness. It is unlikely that one can find any form of study of intelligent human behavior that dispenses with the use of this metaphor - just as in previous eras it was impossible to talk about consciousness without mentioning spirit or deities . The validity of the IO metaphor in today's world is taken for granted.

However, the IP metaphor is, in the end, just another metaphor - a story we tell to make sense of something we don't understand. Like all the metaphors before it, it will have to be abandoned at some point, replaced by a new metaphor or, if we are lucky, by real knowledge.

The very idea that people must process information simply because computers process information is frankly stupid. And when one day the IP metaphor is finally abandoned, historians will surely look at our views with derision, just as we today find the hydraulic and mechanical metaphors stupid.

The dollar experiment

To demonstrate the falsity of the IP metaphor, Robert Epstein usually calls up a volunteer during a lecture and asks them to draw a $1 bill on the board as realistically as possible. When the student finishes, the psychologist covers the drawing with a sheet of paper, pins a real banknote next to it, and asks the volunteer to repeat the exercise. The audience is then asked to compare the results.

As a rule, students are surprised at how little the two images resemble each other. The drawing from memory cannot compare with the one copied from the original - even though every student has seen a dollar bill thousands of times.

What's the problem? Don't we have a "representation" of the banknote in our brains that is "stored" in the "data register" of our memory? Can't we just "extract" the picture and use it to draw a copy?

Obviously not, and even in a thousand years neuroscience will not discover a "representation" of the dollar bill stored in the human brain, for the simple reason that it is not there.

A large number of articles about the brain tell us that even the simplest memories involve many areas of the brain, sometimes quite extensive. When it comes to strong emotions, the activity of millions of neurons can increase simultaneously. Neuropsychologists from the University of Toronto studied plane crash survivors and found that memories of the tragedy involved many different zones, including the amygdala and visual cortex.

What happens when the student draws the dollar from memory? Having seen the bill many times, their brain has changed. More precisely, their neural networks have changed in such a way that they can visualize the banknote - that is, re-experience seeing the dollar, at least to some extent.

The difference between the two drawings is a reminder that visualizing something (seeing it in its absence) is much less accurate than direct observation. This is why we are much better at recognition than at recall. When we recall something, we try to relive an experience; when we recognize something, we only have to register that we have encountered it before.

Even if the student had made a conscious effort to memorize the bill in every detail, the picture still could not be said to be "stored" in the brain. The student has simply become better prepared to draw a dollar from memory - just as a pianist, through practice, becomes better at playing a concerto without in any way absorbing a copy of the score.

Conductor Arturo Toscanini had a photographic memory and could reproduce a 2.5-hour opera without a score, but he did not need to “download” it into his brain - he lived it anew every time

Brain without information

Starting from this simple exercise, we can begin to build a metaphor-free theory of intelligent human behavior. In this theory the brain is not completely empty, but at least it is free of the baggage of the IP metaphor.

Over a lifetime a person goes through various experiences that change them. Three types of experience deserve special mention: 1. We observe what is happening around us (how other people behave, how music sounds, what instructions we are given, how words look on a page and pictures on a screen). 2. We notice that unimportant stimuli (such as the sound of a siren) occur together with important ones (such as the appearance of police cars). 3. We are punished or rewarded for behaving in certain ways.

To become more successful members of our species, we change in ways that reflect these experiences. If we can recite a poem or sing a song from memory, if we can follow instructions, if we respond to secondary stimuli as we do to primary ones, if we behave in ways that earn the approval of others - in all these cases, our fitness for social life increases.

Despite the headlines, no one yet knows how the brain changes when we memorize a song or a poem. But neither the song nor the poem is "saved" in the mind. The brain has simply changed in an orderly way, so that under certain conditions we can now sing the song or recite the poem. When the moment comes, neither is "retrieved" from any specific location in the brain, any more than finger movements are "retrieved" from memory when we drum on a table. We simply sing or recite - with no retrieval involved.

Recently, more and more cognitive psychologists have been abandoning the "computer" view of the brain entirely. Among them is Anthony Chemero of the University of Cincinnati, who, together with his colleagues, insists that organisms interact with their surroundings directly - and this becomes the basis for a new description of intelligent behavior.

Here is another example of how differently the "information processing" perspective and the new "anti-representational" perspective approach consciousness. In 2002, scientists at Arizona State University described two possible views of a simple action in sport: a baseball player trying to catch a fly ball. According to the IP metaphor, the player's brain must estimate the initial conditions of the ball's flight - speed, angle, trajectory - then build and analyze an internal model of the motion, predict where the ball will come down, and use that model to adjust the body's movements in real time and catch the ball.

All this would be true if we functioned like computers. But the study's author, Michael McBeath, and his colleagues explained what happens much more simply: to catch the ball, the fielder only needs to keep moving so that the ball stays in a constant visual relationship to home plate (the corner of the diamond where the batter stands) and the surrounding scenery. It sounds complicated, but in practice it is remarkably simple and requires no calculations, representations or algorithms.
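To make the contrast concrete, here is a small Python sketch of the informational basis of such a strategy. It uses a simplified, one-dimensional relative of McBeath's idea, often called optical acceleration cancellation: the fielder never computes a trajectory, but only watches how fast the ball's elevation angle is rising. The projectile numbers and fielder positions below are invented for the demo.

```python
import math

G = 9.81  # m/s^2

def tan_elevation_series(v0, angle_deg, fielder_x, dt=0.05):
    """Tangent of the ball's elevation angle as seen by a fielder standing at
    fielder_x, sampled over the flight of a ball launched from x = 0."""
    angle = math.radians(angle_deg)
    vx, vy0 = v0 * math.cos(angle), v0 * math.sin(angle)
    flight_time = 2 * vy0 / G
    series, t = [], dt
    while t < flight_time:
        x, y = vx * t, vy0 * t - 0.5 * G * t * t
        if x >= fielder_x - 0.5:                 # stop once the ball is nearly overhead
            break
        series.append(y / (fielder_x - x))       # the only quantity the fielder tracks
        t += dt
    return series

def advice(series):
    """Constant rise -> stay put; rise speeding up -> back up; rise slowing -> run in."""
    rates = [b - a for a, b in zip(series, series[1:])]
    if all(abs(r - rates[0]) < 1e-4 for r in rates):
        return "rises at a constant rate -> you are standing where it will land"
    return ("rise keeps accelerating -> it will land behind you, back up"
            if rates[-1] > rates[0]
            else "rise slows down -> it will fall short, run in")

if __name__ == "__main__":
    v0, angle = 30.0, 50.0
    landing = v0 ** 2 * math.sin(math.radians(2 * angle)) / G   # physics, for reference only
    for fx in (75.0, landing, 110.0):
        print(f"fielder at {fx:6.1f} m: {advice(tan_elevation_series(v0, angle, fx))}")
```

No model of the ball's flight appears in the fielder's rule; a single optical variable, watched over time, is enough to steer.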

Psychologists Andrew Wilson and Sabrina Golonka of Leeds Beckett University in the UK have been blogging for years, collecting evidence similar to the baseball example. They describe their goal as follows:

"We strive for a more coherent, more naturalistic approach to the rigorous study of human behavior that does not fit into mainstream views in the cognitive sciences."

However, Wilson and Golonka are in the minority. The vast majority of brain researchers still actively use the IP metaphor, and a huge number of predictions are made by comparing the brain to a computer. For example, you have probably read that it will someday be possible to upload human consciousness into a computer, and that this will make us incredibly smart and possibly immortal. Similar predictions have been made by Ray Kurzweil and Stephen Hawking, among others. The same idea became the premise of the film Transcendence with Johnny Depp, whose hero uploads his brain to the Internet and begins to terrorize humanity.

Fortunately, no such misfortune threatens us, because the IP metaphor has no basis. We will never have to worry about minds running amok in cyberspace. But there is also bad news: we will not achieve immortality by moving to a computer either - not only because there is no "consciousness program" in the brain, but also because of what Epstein calls the problem of uniqueness. And this is the most important part of his theory.

The problem of uniqueness

Since the brain has neither a "data store" nor "representations" of stimuli, and since the brain must be changed by experience in order to function successfully, there is no reason to believe that two people will change in the same way under the influence of the same event. Suppose you come to a concert to hear Beethoven's Fifth Symphony. Most likely, the changes that occur in your brain will be very different from the changes that occur in the brain of the person in the next seat. Whatever those changes are, they occur in a unique configuration of neurons built up over decades of unique experience.

In his classic 1932 work Remembering, the British psychologist Sir Frederic Bartlett showed that two people retell a story they have heard differently, and that over time their versions diverge more and more. Neither listener creates a "copy" of the story; instead, each is changed by it - enough to be able to retell it later. Days, months, even years afterwards, subjects can retell the story, though not in every detail.

On the one hand, this is very inspiring: every person on earth is truly unique, not only genetically but also in the structure of their gray matter. On the other hand, it is discouraging, because it makes the neuropsychologist's task unimaginably difficult. Every experience produces an orderly change involving thousands or millions of neurons, or even the whole brain, and the configuration of these changes is different for every person.

Moreover, even if we had the technology to take a snapshot of all 86 billion neurons and then run a simulation of them inside a computer, that enormous structure would mean nothing outside the brain that produced it. This is perhaps where the IP metaphor has most distorted our understanding of how the mind works. Computers can store exact copies that remain unchanged for long periods even when the power is switched off; the brain sustains our mind only as long as it remains alive. Either the brain keeps functioning, or we disappear.

In his book The Future of the Brain, neuroscientist Steven Rose likewise showed that a snapshot of the brain at a given moment may be useless unless we know the entire life history of its owner - perhaps even down to such details as the social conditions in which the person spent their childhood.

That is how complex the problem is. To understand even the basics of how the brain supports intelligence, we would need to know not only the state of the 86 billion neurons and the 100 trillion connections between them at a given moment, not only the intensity with which neurons exchange signals, not only the states of the more than 1,000 proteins present at each synapse, but also how the brain's activity from one moment to the next contributes to the integrity of the whole system. Add to this the uniqueness of each brain (a consequence of the unique biography of its owner), and you can understand why neuroscientist Kenneth Miller, in a recent New York Times op-ed, suggested that understanding the basic laws of neural connectivity will take "centuries".

Meanwhile, huge amounts of money are spent on brain research based on false premises. The most egregious case, reported by Scientific American last year, is the large-scale Human Brain Project, on which the EU has spent more than a billion dollars. The head of the collaboration, the charismatic Henry Markram, managed to convince sponsors that by 2023 he would create a supercomputer simulation of an entire brain and that this would revolutionize the search for a cure for Alzheimer's disease; EU science agencies gave him carte blanche. The result? The scientific community rebelled against the overly narrow approach and the unwise spending, Markram was forced to step down, and the entire initiative was left in limbo.

Henry Markram talks about the Human Brain Project at the TED conference

Robert Epstein concludes the article with the following appeal:

“We are organisms, not computers. Let's keep trying to understand the human mind without weighing ourselves down with unnecessary intellectual baggage. The ‘information processing’ metaphor has marked its half-century anniversary, but it has produced few real insights. It's time to press the Delete key.”

Afterword "Futurist"

On the Aeon magazine website, Robert Epstein's article sparked a lively discussion and drew heavy criticism. Readers left more than 400 comments. Many accused the author of not providing enough arguments to support his thesis and of describing his opponents' position too crudely. The “information processing” metaphor, critics argued, does not put the brain in the same category as computers. Of course, individual neurons are not carriers of memories, and representations in the brain are not literal copies of pictures and words; nevertheless, “information” is a broad enough concept to be applied in both cybernetics and neuroscience. Even readers who agreed with the article's main message accused Epstein of going too far: the psychologist got carried away with debunking and ended up painting an overly simplified picture.

At the same time, many readers agreed that “downloading the brain into a computer” is a bad idea, and supported the author in his call to consider the brain as a unique living organism, and not as a soulless data processing machine.


Your brain does not process information, retrieve knowledge, or store memories. In short, your brain is not a computer. American psychologist Robert Epstein explains why viewing the brain as a machine serves neither the advancement of science nor the understanding of human nature.

Robert Epstein is a senior psychologist at the American Institute for Behavioral Research and Technology in California. He is the author of 15 books and the former editor-in-chief of Psychology Today.

Despite their best efforts, neuroscientists and cognitive psychologists will never find in the brain a copy of Beethoven's Fifth Symphony, or copies of words, pictures, grammatical rules, or any other external stimuli. Of course, the human brain is not completely empty. But it does not contain most of the things people think it contains, not even such simple things as “memories”.

Our misconceptions about the brain have deep historical roots, but the invention of the computer in the 1940s has confused us especially. For half a century, psychologists, linguists, neuroscientists and other experts on human behavior have argued that the human brain works like a computer.

To get an idea of how frivolous this idea is, consider the brains of babies. A healthy newborn has more than ten reflexes. He turns his head toward whatever strokes his cheek and sucks on whatever enters his mouth. He holds his breath when immersed in water. He grips things placed in his hands so tightly that he can almost support his own weight. But perhaps most importantly, newborns have powerful learning mechanisms that allow them to change quickly so they can interact more effectively with the world around them.

Senses, reflexes and learning mechanisms are what we have from the very beginning, and when you think about it, that's quite a lot. If we lacked any of these abilities, we would probably have difficulty surviving.

But here is what we are not born with: information, data, rules, knowledge, vocabulary, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols and buffers - the elements that allow digital computers to behave somewhat rationally. Not only are these things not in us from birth, they never develop in us during our lives.

We do not store words or the rules that tell us how to use them. We do not create images of visual impulses, store them in a short-term memory buffer, and then transfer the images to a long-term memory device. We do not retrieve information, images or words from a memory register. Computers do all of these things, but living beings do not.

Computers literally process information - numbers, words, formulas, images. The information must first be translated into a format that a computer can recognize, that is, into sets of ones and zeros (“bits”) collected into small blocks (“bytes”).

Computers move these sets from place to place among different areas of physical memory, implemented as electronic components. Sometimes they copy the sets, and sometimes they transform them in various ways - say, when you correct errors in a manuscript or retouch a photograph. The rules that a computer follows when moving, copying, or operating on these arrays of information are also stored inside the computer. A set of rules is called a “program” or “algorithm”. A set of algorithms working together that we use for different purposes (for example, buying stocks or dating online) is called an “application”.
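Purely as an illustration, here is a minimal Python sketch of that “store, copy, transform by rule” cycle. Everything in it is invented for the example (the text and the fix_typo rule stand in for no real system); the point is only that each step is an explicit operation on stored bytes.

    # A toy illustration of the cycle described above: encode symbols as bytes,
    # store them, copy them exactly, and transform them by following a stored rule.
    text = "Speling mistake"                  # information to be processed
    raw = text.encode("utf-8")                # explicit representation: a sequence of byte values
    print(list(raw)[:4])                      # [83, 112, 101, 108] -- just numbers

    copy = bytes(raw)                         # an exact, bit-for-bit copy
    assert copy == raw                        # the copy is indistinguishable from the original

    def fix_typo(s: str) -> str:
        # A tiny "program": a rule the machine applies mechanically,
        # like correcting an error in a manuscript.
        return s.replace("Speling", "Spelling")

    corrected = fix_typo(copy.decode("utf-8"))
    print(corrected)                          # "Spelling mistake"

Every step here is explicit: the symbols exist as concrete byte values in memory, the copy is exact, and the transformation happens only because a stored rule says so.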

These are known facts, but they need to be spelled out to make things clear: computers operate on a symbolic representation of the world. They do store and retrieve. They really process. They do have physical memory. They are truly driven by algorithms in every way.

However, people don’t do anything like that. So why do so many scientists talk about our mental activity as if we were computers?

In 2015, artificial intelligence expert George Zarkadakis released a book, In Our Image, in which he describes six different concepts that people have used over the past two thousand years to describe human intelligence.

In the earliest of them, eventually preserved in the Bible, humans were created from clay or mud, which an intelligent god then infused with his spirit. That spirit “explains” our intelligence - at least grammatically.

The invention of hydraulic engineering in the 3rd century BC led to the popularity of a hydraulic concept of human consciousness. The idea was that the flow of different fluids in the body - the “humours” - accounted for both physical and mental functioning. The hydraulic concept persisted for more than 1,600 years, all the while hampering the development of medicine.

By the 16th century, devices powered by springs and gears had appeared, which inspired René Descartes to argue that man is a complex machine. In the 17th century, British philosopher Thomas Hobbes proposed that thinking occurs through small mechanical movements in the brain. By the beginning of the 18th century, discoveries in the field of electricity and chemistry led to the emergence of a new theory of human thinking, again of a more metaphorical nature. In the mid-19th century, German physicist Hermann von Helmholtz, inspired by recent advances in communications, compared the brain to a telegraph.

Each concept reflects the most advanced ideas of the era that gave birth to it. As one might expect, just a few years after the birth of computer technology in the 1940s, it began to be argued that the brain works like a computer: the brain itself playing the role of the hardware, and our thoughts the role of the software.

This view reached its zenith in the 1958 book The Computer and the Brain, in which mathematician John von Neumann stated emphatically that the function of the human nervous system is “digital in the absence of evidence to the contrary.” Although he acknowledged that very little is known about the role of the brain in the functioning of intelligence and memory, the scientist drew parallels between the components of computer machines of that time and areas of the human brain.

Thanks to subsequent advances in both computer technology and brain research, an ambitious interdisciplinary effort to understand human consciousness gradually developed, based on the idea that people, like computers, are information processors. This work now encompasses thousands of studies, receives billions of dollars in funding, and has been the subject of numerous papers. Ray Kurzweil's book How to Create a Mind: The Secret of Human Thought Revealed (2013) illustrates this point, describing the brain's “algorithms”, its “information processing” techniques, and even how it superficially resembles integrated circuits in its structure.

The idea of human thinking as information processing (IP) currently dominates both among ordinary people and among scientists. But it is, in the end, just another metaphor, a fiction that we pass off as reality in order to explain something we do not really understand.

The flawed logic of the IP concept is quite easy to lay out. It rests on a fallacious syllogism with two reasonable premises and a wrong conclusion. Reasonable premise #1: all computers are capable of intelligent behavior. Reasonable premise #2: all computers are information processors. Incorrect conclusion: all objects capable of behaving intelligently are information processors.
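To make the structure of the fallacy explicit, it can be written out in first-order notation (a sketch of ours; the predicate names Computer, Intelligent and InfoProcessor are simply labels for the three categories above):

\[
\forall x\,\bigl(\text{Computer}(x) \rightarrow \text{Intelligent}(x)\bigr), \qquad
\forall x\,\bigl(\text{Computer}(x) \rightarrow \text{InfoProcessor}(x)\bigr)
\;\not\vdash\;
\forall x\,\bigl(\text{Intelligent}(x) \rightarrow \text{InfoProcessor}(x)\bigr)
\]

Both premises say something about computers only, so nothing follows about intelligent things in general. The same invalid form would let us conclude that, because all squares are rectangles and all squares have four equal sides, all rectangles have four equal sides.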

Setting formalities aside, the idea that people must be information processors simply because computers are information processors is complete nonsense, and when the IP concept is finally abandoned, historians will almost certainly look back on it the way we now look back on the hydraulic and mechanical concepts: as nonsense.

Try an experiment: draw a hundred-ruble bill from memory, then take a real one out of your wallet and copy it. Do you see the difference?

The drawing made without the original will almost certainly turn out terrible compared with the drawing made from life, even though you have seen this bill thousands of times.

What is the problem? Shouldn't the "image" of the banknote be "stored" in the "storage register" of our brain? Why can't we just "refer" to this "image" and depict it on paper?

Obviously not, and thousands of years of research will not allow us to determine the location of the image of this bill in the human brain simply because it is not there.

The idea, promoted by some scientists, that individual memories are somehow stored in special neurons is absurd. Among other things, this theory takes the question of the structure of memory to an even more intractable level: how and where is memory stored in cells?

One prediction, expressed in one form or another by futurist Ray Kurzweil, physicist Stephen Hawking and many others, is that if human consciousness is like a program, then technologies should soon appear that will allow it to be loaded onto a computer, thereby greatly enhancing our intellectual abilities and making immortality possible. This idea formed the basis of the plot of the dystopian film Transcendence (2014), in which Johnny Depp played a Kurzweil-like scientist who uploaded his mind to the Internet, with devastating consequences for humanity.

Fortunately, the IP concept is nowhere close to reality, so we do not have to worry about the human mind running amok in cyberspace, and, sadly, we will never be able to achieve immortality by downloading our souls onto another medium. It is not just that the brain lacks software; the problem goes even deeper - call it the problem of uniqueness - and it is both fascinating and depressing.

Since our brains have neither “memory devices” nor “images” of external stimuli, and since the brain changes over the course of life under the influence of external conditions, there is no reason to believe that any two people in the world will react to the same stimulus in the same way. If you and I attend the same concert, the changes that happen in your brain after listening will be different from the changes that happen in mine. These changes depend on the unique structure of nerve cells that has been shaped over the whole of one's previous life.

This is why, as Frederic Bartlett showed in his 1932 book Remembering, two people hearing the same story will not be able to retell it in exactly the same way, and over time their versions of the story will become less and less similar to each other.

I think this is very inspiring, because it means that each of us is truly unique, not only in our genes but also in the way our brains change over time. But it is also disheartening, because it makes the already difficult job of the neuroscientist almost impossible. Each change can involve thousands or millions of neurons, or even the entire brain, and the nature of these changes is unique in each case.

Worse, even if we could record the state of each of the brain's 86 billion neurons and simulate it all on a computer, this enormous model would be useless outside the body to which the brain belongs. This is perhaps the most grievous distortion of our understanding of human nature that we owe to the erroneous IP concept.

Computers store exact copies of data, and those copies can remain unchanged for long periods even when the power is turned off, while the brain sustains our intelligence only as long as it remains alive. There is no switch. Either the brain keeps working, or we cease to exist. Moreover, as neuroscientist Steven Rose noted in The Future of the Brain (2005), a copy of the brain's current state may be useless without knowledge of the full biography of its owner, including even the social context in which the person grew up.

Meanwhile, huge amounts of money are being spent on brain research based on false ideas and promises that will not be kept. The most striking example is the European Union's $1.3 billion Human Brain Project. European authorities believed Henry Markram's tempting promise to create, by 2023, a working supercomputer simulation of brain function that would radically change the treatment of Alzheimer's disease and other ailments, and gave the project almost unlimited funding. Less than two years after its launch, the project had turned out to be a failure, and Markram was asked to resign.

People are living organisms, not computers. Accept it. We need to keep doing the hard work of understanding ourselves without burdening it with unnecessary intellectual baggage. In its half-century of existence, the IP concept has given us only a handful of useful discoveries. It is time to press the Delete button.