How to create your own neural network from scratch in Python: neural networks for beginners


We tell you how to create a simple neural network in a few steps and teach it to recognize famous entrepreneurs in photographs.

Step 0. Let's understand how neural networks work

The easiest way to understand the principles of operation of neural networks is to use the example of Teachable Machine, an educational project by Google.

Teachable Machine takes an image from the laptop camera as input data, that is, what the neural network needs to process. As output data, that is, what the neural network should do after processing the input, you can use a GIF or a sound.

For example, you can teach Teachable Machine to say "Hi" when you raise your palm, "Cool" when you give a thumbs up, and "Wow" when you make a surprised face with an open mouth.

First you need to train the neural network. To do this, raise your palm and press the “Train Green” button - the service takes several dozen pictures to find a pattern in the images. A set of such images is usually called a “dataset”.

Now all that remains is to select the action to trigger when the image is recognized: say a phrase, show a GIF or play a sound. In the same way, we train the neural network to recognize a surprised face and a thumbs up.

Once the neural network is trained, it can be used. Teachable Machine shows a "confidence" score: how confident the system is that it is seeing one of the things it was taught.

A short video about how the Teachable Machine works

Step 1. Preparing the computer to work with a neural network

Now we will create our own neural network which, when given an image, will report what is shown in the picture. First, we will teach the neural network to recognize flowers in a picture: chamomile, sunflower, dandelion, tulip or rose.

To create your own neural network, you will need Python, one of the most minimalistic and common programming languages, and TensorFlow, Google's open library for creating and training neural networks.
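Before going further, it is worth making sure both tools are actually in place. A minimal check might look like this (my own sketch, assuming Python 3 and that "pip install tensorflow numpy" has already been run):

# Minimal environment check: import the libraries and print their versions.
import numpy as np
import tensorflow as tf

print("NumPy version:", np.__version__)
print("TensorFlow version:", tf.__version__)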

Neural networks are now in fashion, and for good reason. With their help you can, for example, recognize objects in pictures or, conversely, paint the nightmares of Salvador Dali. Thanks to convenient libraries, the simplest neural networks can be created in just a couple of lines of code; calling on IBM's artificial intelligence does not take much more.

Theory

Biologists still do not know exactly how the brain works, but the operating principles of the individual elements of the nervous system have been studied well. The nervous system consists of neurons: specialized cells that exchange electrochemical signals with each other. Each neuron has many dendrites and one axon. Dendrites can be compared to inputs through which data enters the neuron, while the axon serves as its output. The connections between axons and dendrites are called synapses. They not only transmit signals, but can also change their amplitude and frequency.

The transformations that occur at the level of individual neurons are very simple, but even very small neural networks are capable of a lot. The entire behavioral repertoire of the worm Caenorhabditis elegans, including movement, searching for food, various reactions to external stimuli and much more, is encoded in just three hundred neurons. And never mind worms: even ants get by with 250 thousand neurons, and what they do is clearly beyond the capabilities of machines.

Almost sixty years ago, the American researcher Frank Rosenblatt tried to create a computer system designed in the image of the brain, but the capabilities of his creation were extremely limited. Since then, interest in neural networks has flared up more than once, but time after time it turned out that there was not enough computing power for any advanced neural network. A lot has changed in this respect over the past decade.

Electromechanical brain with motor

Rosenblatt's machine was called the Mark I Perceptron. It was designed for image recognition, a task that computers are still only so-so at. The Mark I was equipped with something like a retina: a square array of 400 photocells, twenty vertically and twenty horizontally. Photocells were randomly connected to electronic models of neurons, and they, in turn, were connected to eight outputs. Rosenblatt used potentiometers as synapses connecting electronic neurons, photocells and outputs. When training the perceptron, 512 stepper motors automatically rotated the potentiometer knobs, adjusting the voltage on the neurons depending on the accuracy of the output result.

That's how a neural network works in a nutshell. An artificial neuron, like a real one, has several inputs and one output. Each input has a weighting factor. By changing these coefficients, we can train the neural network. The dependence of the output signal on the input signals is determined by the so-called activation function.
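As a toy sketch of such an artificial neuron (the input values, weights and the threshold-style activation below are assumptions for illustration, not anything from the article):

# A single artificial neuron: weighted inputs, one output, an activation function.
def step_activation(x, threshold=1.0):
    # Fires (returns 1) only if the weighted sum reaches the threshold.
    return 1 if x >= threshold else 0

def neuron(inputs, weights):
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return step_activation(weighted_sum)

# Training such a neuron simply means adjusting its weights.
print(neuron([1, 0, 1], [0.6, 0.9, 0.5]))  # prints 1, because 0.6 + 0.5 = 1.1 >= 1.0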

In Rosenblatt's perceptron, the activation function added up the weights of all inputs that received a logical one and then compared the sum with a threshold value. The disadvantage of this approach was that a slight change in one of the weighting coefficients could have a disproportionately large impact on the result, which makes learning difficult.

Modern neural networks typically use nonlinear activation functions, such as the sigmoid. In addition, old neural networks had too few layers. Nowadays, one or more hidden layers of neurons are usually located between the input and output. That's where all the fun happens.

To make it easier to understand what we are talking about, look at this diagram. It is a feedforward neural network with one hidden layer. Each circle corresponds to a neuron. On the left are the neurons of the input layer. On the right is the output layer neuron. In the middle there is a hidden layer with four neurons. The outputs of all neurons in the input layer are connected to each neuron of the first hidden layer. In turn, the inputs of the output layer neuron are connected to all the outputs of the hidden layer neurons.
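A rough sketch of one forward pass through a network like the one in the diagram might look like this (the number of inputs, the random weights and the sigmoid activation are assumptions for illustration):

# Forward pass: input layer -> hidden layer of 4 neurons -> single output neuron.
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)
W_hidden = rng.random((3, 4))   # weights from the input layer to the hidden layer
W_output = rng.random((4, 1))   # weights from the hidden layer to the output neuron

x = np.array([[0.5, 0.1, 0.9]])        # one input example (3 assumed inputs)
hidden = sigmoid(x @ W_hidden)          # activations of the 4 hidden neurons
output = sigmoid(hidden @ W_output)     # activation of the output neuron
print(output)                           # a single number between 0 and 1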

Not all neural networks are designed this way. For example, there are (albeit less common) networks in which the signal from the neurons is supplied not only to the next layer, like the forward propagation network from our diagram, but also in the opposite direction. Such networks are called recurrent. Fully connected layers are also just one option, and we'll even touch on one of the alternatives.

Practice

So, let's try to build a simple neural network with our own hands and figure out how it works as we go. We will use Python with the Numpy library (we could do without Numpy, but with it the linear algebra takes less effort). The example in question is based on code by Andrew Trask.

We will need functions to calculate the sigmoid and its derivative:
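A minimal sketch of what these two helper functions usually look like with NumPy (this is the standard definition rather than necessarily the article's exact listing; following the Trask-style convention, the derivative is computed from the sigmoid's own output):

import numpy as np

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(output):
    # "output" is assumed to already be sigmoid(x).
    return output * (1 - output)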


Hi all!

My impressions from the first sections are wonderful. It is one of the best introductions to the field of neural networks that I have ever seen. I liked the book so much that I decided to translate it into Russian and post it here in the form of articles. Some of the material from the book will be used to improve existing chapters, and some will go into the next ones.

I have already translated the first two sections of Chapter 1. You can read these sections below.

Read - enjoy!

Chapter 1. How they work

1.1 Easy for me, hard for you

All computers are calculators at heart. They can count very quickly.

You shouldn't blame them for this. They do their job well: they calculate the price taking into account the discount, calculate debt interest, draw graphs based on available data, and so on.

Even watching TV or listening to music on a computer involves doing a huge amount of arithmetic operations over and over again. It may sound surprising, but rendering each frame of an image from zeros and ones obtained over the Internet involves calculations that are not much more complex than the problems we all solved in school.

However, a computer's ability to add thousands and millions of numbers per second is not artificial intelligence at all. It is difficult for a person to add numbers so quickly, but you must agree that this work does not require serious intellectual expenditure. You must adhere to a previously known algorithm for adding numbers and nothing more. This is what all computers do - they adhere to a clear algorithm.

Everything is clear with computers. Now let's talk about what we are good at compared to them.

Look at the pictures below and determine what they show:

You see people's faces in the first picture, a cat's face in the second and a tree in the third. Did you recognize the objects in these pictures? Note that you only needed a glance to unmistakably understand what is depicted in them. We are rarely wrong about such things.

We instantly and without much difficulty perceive the huge amount of information that images contain and very accurately identify objects in them. But for any computer such a task will be a challenge.

Any computer, regardless of its complexity and speed, lacks one important quality - intelligence, which every person possesses.

But we want to teach computers to solve such problems, because they are fast and do not get tired. Artificial intelligence deals precisely with problems of this kind.

Of course, computers will continue to consist of microcircuits. The task of artificial intelligence is to find new algorithms of computer operation that make it possible to solve intellectual problems. These algorithms are not always perfect, but they get the job done and make the computer appear to behave like a human.

Key points

  • There are tasks that are easy for ordinary computers, but challenging for humans. For example, multiplying a million numbers by each other.
  • On the other hand, there are equally important tasks that are incredibly difficult for a computer and do not cause problems for humans. For example, recognizing faces in photographs.

1.2 Simple predictive machine

Let's start with something very simple. Further we will build on the material studied in this section.

Imagine a machine that receives a question, “thinks” about it, and then produces an answer. In the example above, you received a picture as input, analyzed it with the help of your brain and made a conclusion about the object that is depicted in it. It looks something like this:

Computers don't really "think" anything through. They simply apply previously known arithmetic operations. So let's call a spade a spade:

The computer takes some data as input, performs the necessary calculations and produces a finished result. Consider the following example: if the computer receives the expression \(3 \times 4\) as input, it converts it into a simpler sequence of additions. As a result, we get 12.
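As a toy sketch (mine, not from the book) of reducing a multiplication to a sequence of additions:

# Reducing 3 x 4 to repeated addition.
def multiply_by_addition(a, b):
    result = 0
    for _ in range(b):   # add "a" to the running total "b" times
        result += a
    return result

print(multiply_by_addition(3, 4))  # 12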

Doesn't look too impressive. This is fine. With these trivial examples you will see the idea that neural networks implement.

Now imagine a machine that converts kilometers to miles:

Now imagine that we do not know the formula that converts kilometers into miles. We only know that the relationship between these two quantities is linear. This means that if we double the distance in miles, then the distance in kilometers will also double. This is intuitive: the universe would be very strange if this rule did not hold.

The linear relationship between kilometers and miles gives us a hint in what form we need to convert one quantity to another. We can represent this dependency like this:

\[ \text{miles} = \text{kilometers} \times C \]

In the expression above, ​\(C\) ​ acts as a constant number. We don’t yet know what ​\(C\) ​ is equal to.

The only thing we know is several correctly measured distances in kilometers and miles.

And how can we find out the value of \(C\)? Let's just pick a random number and say that our constant is equal to it. Let \(C = 0.5\). What will happen?

Assuming that \(C = 0.5\), we get 50 miles out of 100 kilometers. This is an excellent result considering that we chose \(C = 0.5\) completely at random! But we know that our answer is not entirely correct, because according to the table of correct measurements we should have received 62.137 miles.

We missed by 12.137 miles. This is our error: the difference between the previously known correct result (which in this case we have in the table) and the answer we received.

\[ \begin{gather*} \text{error} = \text{correct value} - \text{answer received} \\ = 62.137 - 50 \\ = 12.137 \end{gather*} \]

Let's look at the error again. The resulting distance is 12.137 miles too short. Since the formula for converting kilometers to miles is linear (\(\text{miles} = \text{kilometers} \times C\)), increasing the value of \(C\) will also increase the output in miles.

Let's now assume that ​\(C = 0.6\) ​ and see what happens.

Since \(C = 0.6\), for 100 kilometers we get \(100 \times 0.6 = 60\) miles. This is much better than the previous attempt (which gave 50 miles)! Now our error is very small, only 2.137 miles. Quite an accurate result.

Now notice how we used the resulting error to adjust the value of the constant \(C\). We needed to increase the output number of miles, so we increased \(C\) a little. Note that we are not using algebra to get the exact value of \(C\), although we could. Why not? Because the world is full of problems that have no simple mathematical connection between the input received and the output produced.

It is for problems that practically cannot be solved by simple calculation that we need such sophisticated things as neural networks.

Feeling bold, let's now try \(C = 0.7\): for 100 kilometers the machine gives 70 miles. My God! We grabbed too much and overshot the correct result. Our previous error was 2.137, and now it is -7.863. The minus sign means that our result turned out to be greater than the correct answer, since the error is calculated as the correct answer minus the received answer.

It turns out that \(C = 0.6\) already gave a much more accurate output. We could have stopped there. But let's still increase \(C\), just not by much this time. Let \(C = 0.61\).

That's better! Our machine gives 61 miles, which is only 1.137 miles less than the correct answer (62.137).

There is an important lesson to be learned from this situation of exceeding the correct answer. As you get closer to the correct answer, you should change the machine parameters less and less. This will help avoid unpleasant situations that lead to exceeding the correct answer.

The amount by which we adjust \(C\) depends on the error. The larger the error, the more we change the value of \(C\). When the error becomes small, \(C\) needs to be changed only a little. Logical, right?

Believe it or not, you have just understood the essence of how neural networks work. We train the “machines” to gradually produce more and more accurate results.

It is important to understand how we solved this problem. We did not solve it in one go, although in this case it could have been done that way. Instead, we arrived at the correct answer step by step so that with every step our results became better.
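A tiny sketch (my own, using the numbers from this section) of that step-by-step refinement is shown below; the learning_rate constant and the update rule are illustrative assumptions, not the book's notation:

# Iteratively refining the constant C in miles = kilometers * C.
correct_miles = 62.137   # known correct answer for 100 kilometers (from the table)
kilometers = 100.0
C = 0.5                  # initial guess chosen at random
learning_rate = 0.7      # take only a fraction of the needed correction each step

for step in range(5):
    miles = kilometers * C
    error = correct_miles - miles              # error = correct value - received answer
    C += learning_rate * error / kilometers    # bigger error -> bigger adjustment
    print(step, round(miles, 3), round(error, 3), round(C, 5))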

Aren't the explanations very simple and understandable? Personally, I have never seen a more concise way to explain what neural networks are.

If you don't understand something, ask questions on the forum.

Your opinion is important to me - leave comments :)

The correct formulation of the question should be: how do I train my own neural network? You do not need to write the network yourself; take one of the many ready-made implementations, to which previous authors have provided links. But such an implementation by itself is like a computer with no programs loaded onto it. For the network to solve your problem, it needs to be taught.

And here comes the most important thing you need for this: DATA. You need many examples of the problems that will be fed to the input of the neural network, together with the correct answers to those problems. The neural network will learn from them to give these correct answers on its own.

And here a host of details and nuances arise that you need to know and understand for all this to have a chance of giving an acceptable result. It is impossible to cover them all here, so I will just list a few points.

Firstly, the volume of data. This is a very important point. Large companies whose work involves machine learning usually have dedicated departments and staff who do nothing but collect and process data for training neural networks. Data often has to be purchased, and all this activity adds up to a significant expense item.

Secondly, the representation of the data. If each object in your problem is described by a relatively small number of numerical parameters, there is a chance that they can be given to the neural network in this raw form and an acceptable output obtained. But if the objects are complex (pictures, sound, objects of variable dimension), then most likely you will have to spend time and effort extracting from them the features that are meaningful for the problem being solved. This alone can take a long time and have a far greater impact on the final result than even the type and architecture of the neural network chosen.

There are often cases when real data turns out to be too raw and unsuitable for use without preliminary processing: it contains omissions, noise, inconsistencies and errors.

Data should also be collected not just anyhow, but competently and thoughtfully. Otherwise, the trained network may behave strangely and even solve a completely different problem than the author intended.

You also need to understand how to organize the learning process properly so that the network does not overfit. The complexity of the network must be chosen based on the dimensionality of the data and its quantity. Some of the data should be set aside for testing and not used during training in order to assess the real quality of the work; a sketch of such a hold-out split follows below. Sometimes different objects in the training set need to be given different weights. Sometimes it is useful to vary these weights during training. Sometimes it is useful to start training on part of the data and add the rest as training progresses. In general, this can be compared to cooking: every housewife has her own techniques, even for the same dishes.
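As a small illustration of setting data aside for testing, here is a sketch of a simple hold-out split (the array shapes, the random data and the 80/20 proportion are my own assumptions):

# Hold out 20% of the data so the real quality of the network can be assessed.
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((1000, 20))            # 1000 objects, 20 numeric features each
y = rng.integers(0, 2, size=1000)     # correct answers for each object

indices = rng.permutation(len(X))     # shuffle before splitting
split = int(0.8 * len(X))             # 80% for training, 20% held out
train_idx, test_idx = indices[:split], indices[split:]

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
# Train only on (X_train, y_train); report quality only on (X_test, y_test).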


Many of the terms in neural networks are related to biology, so let's start at the beginning:

The brain is a complex thing, but it can be divided into several main parts and operations:

The stimulus can be internal (for example, an image or an idea):

Now let's take a look at the basic and simplified parts of the brain:


The brain is generally like a cable network.

A neuron is the basic unit of computation in the brain. It receives and processes chemical signals from other neurons and, depending on a number of factors, either does nothing or generates an electrical impulse, or action potential, which then sends signals through synapses to neighboring connected neurons:

Dreams, memories, self-regulating movements, reflexes and in general everything you think or do happens thanks to this process: millions, or even billions, of neurons work at different levels and create connections that form various parallel subsystems and make up the biological neural network.

Of course, these are all simplifications and generalizations, but thanks to them we can describe a simple neural network:

And describe it formally using a graph:

Some clarification is required here. The circles are neurons and the lines are the connections between them, and, to keep things simple at this stage, the connections represent the direct movement of information from left to right. The first neuron is currently active and is highlighted in gray. We have also assigned it a number (1 if it is firing, 0 if it is not). The numbers between the neurons show the weights of the connections.

The graphs above show the state of the network at a single moment in time; for a more accurate picture, it needs to be divided into time intervals:

To create your own neural network, you need to understand how weights affect neurons and how neurons learn. As an example, let's take a rabbit (a test rabbit) and put it in the conditions of a classic experiment.

When a safe stream of air is directed at them, rabbits, like people, blink:

This behavior model can be depicted in graphs:

As in the previous diagram, these graphs show only the moment when the rabbit feels the puff of air, and we thus encode the puff as a boolean value. In addition, we calculate whether the second neuron fires based on the weight value. If the weight is equal to 1, the sensory neuron fires and we blink; if the weight is less than 1, we do not blink: the threshold of the second neuron is 1.
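That rule can be sketched in a couple of lines (the function name and the example values are just for illustration; the threshold of 1 comes from the text above):

# The second neuron fires only if (stimulus * weight) reaches its threshold of 1.
def blinks(stimulus, weight, threshold=1.0):
    return stimulus * weight >= threshold

print(blinks(stimulus=1, weight=1.0))   # True  -> air puff with weight 1: blink
print(blinks(stimulus=1, weight=0.0))   # False -> a stimulus with weight 0: no blink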

Let's introduce one more element - a safe sound signal:

We can model a rabbit's interest like this:

The main difference is that now the weight is equal to zero, so we haven't got a blinking rabbit, well, not yet at least. Now let's teach the rabbit to blink on command by mixing stimuli (the beep and the puff of air):

It is important that these events occur in different time epochs; in graphs it will look like this:

The sound itself doesn't do anything, but the airflow still causes the rabbit to blink, and we show this through the weights multiplied by the stimuli (in red).

Learning complex behavior can be simplistically expressed as a gradual change in the weights between connected neurons over time.

To train a rabbit, we repeat the steps:

For the first three attempts, the schemes will look like this:

Please note that the weight of the sound stimulus increases after each repetition (highlighted in red); this value is currently arbitrary - we chose 0.30, but the number could be anything, even negative. After the third repetition you will not notice a change in the rabbit's behavior, but after the fourth repetition something amazing happens - the behavior changes.

We removed the air exposure, but the rabbit still blinks when it hears the beep! Our last diagram can explain this behavior:

We trained the rabbit to respond to sound by blinking.
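The whole conditioning experiment can be sketched in a few lines (the 0.30 increment and the threshold of 1 come from the text above; everything else is an illustrative assumption):

# Each pairing of sound and air puff raises the sound's weight by 0.30,
# until the sound alone crosses the blink threshold.
threshold = 1.0
sound_weight = 0.0

for repetition in range(1, 5):
    sound_weight += 0.30                       # pair the sound with the air puff
    blinks_to_sound_alone = sound_weight >= threshold
    print(repetition, round(sound_weight, 2), blinks_to_sound_alone)
# After the 4th repetition the weight reaches 1.2 and the sound alone
# is enough to make the rabbit blink.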


In a real experiment of this kind, it may take more than 60 repetitions to achieve the result.

Now we will leave the biological world of the brain and rabbits and try to adapt everything we have learned to create an artificial neural network. First, let's try a simple task.

Let's say we have a machine with four buttons that dispenses food when the correct button is pressed (well, or energy if you are a robot). The task is to find out which button gives the reward:

We can depict (schematically) what a button does when clicked like this:

It's best to solve this problem entirely, so let's look at all the possible results, including the correct one:


Click on the 3rd button to get your dinner.

To reproduce a neural network in code, we first need to make a model or graph against which the network can be checked. Here is one graph suitable for the task; moreover, it maps well onto its biological analogue:

This neural network simply receives the incoming information - in this case, the perception of which button was pressed. The network then weights the input and makes its inference by summing over a layer. It sounds a little confusing, but let's see how a button is represented in our model:


Note that all weights are 0, so the neural network is like a baby, completely empty, but completely interconnected.

Thus, we match an external event with the input layer of the neural network and calculate the value at its output. It may or may not coincide with reality, but we will ignore this for now and begin to describe the problem in a way that a computer can understand. Let's start by entering the weights (we'll use JavaScript):

var inputs = [0, 0, 1, 0];    // the third button is the pressed one (values taken from the Go version below)
var weights = [0, 0, 0, 0];
// For convenience, these vectors can simply be stored as arrays
The next step is to create a function that takes the input values ​​and weights and calculates the output value:

function evaluateNeuralNetwork(inputVector, weightVector) {
  var result = 0;
  inputVector.forEach(function(inputValue, weightIndex) {
    var layerValue = inputValue * weightVector[weightIndex];
    result += layerValue;
  });
  return (result.toFixed(2));
}
// May seem complex, but all it does is match up weight/input pairs and add up the result
As expected, if we run this code, we will get the same result as in our model or graph...

evaluateNeuralNetwork(inputs, weights); // 0.00
Live example: Neural Net 001.

The next step in improving our neural network will be a way of checking its own output, or resulting values, against the real situation; let's first encode this specific reality into a variable:

To detect inconsistencies (and how many there are), we'll add an error function:

Error = Reality - Neural Net Output
With it we can evaluate the performance of our neural network:

But more importantly, what about situations where reality produces a positive outcome?

Now we know that our neural network model is broken (and we know how much), great! What's great is that we can now use the error function to control our learning. But all this will make sense if we redefine the error function as follows:

Error = Desired Output - Neural Net Output

An elusive but important difference, quietly showing that we will use previous results to compare with future actions (and for learning, as we will see later). This also exists in real life, which is full of repeating patterns, so it can become an evolutionary strategy (well, in most cases).

var input = [0, 0, 1, 0];
var weights = [0, 0, 0, 0];
var desiredResult = 1;
And a new function:

function evaluateNeuralNetError(desired, actual) {
  return (desired - actual);
}
// After evaluating both the network and the error we would get:
// "Neural Net output: 0.00 Error: 1"
Live example: Neural Net 002.

Let's summarize. We started with a problem, made a simple model of it in the form of a biological neural network, and had a way to measure its performance compared to reality or the desired result. Now we need to find a way to correct the discrepancy, a process that for both computers and humans can be thought of as learning.

How to train a neural network?

The basis of training for both biological and artificial neural networks is repetition and learning algorithms, so we will work with each of them separately. Let's start with learning algorithms.

In nature, learning algorithms refer to changes in the physical or chemical characteristics of neurons after experiments:

That dramatic illustration showed how two neurons change over time; in code, our "learning algorithm" model simply means that we will change things over time to make our lives easier. So let's add a variable to indicate how easy life is:

var learningRate = 0.20; // The larger the value, the faster the learning process will be :)
And what will this change?

This will change the weights (just like the rabbit!), especially the output weight we want to produce:

How you code such an algorithm is your choice; for simplicity, I add the learning rate to the weight. Here it is in the form of a function:

function learn(inputVector, weightVector) {
  weightVector.forEach(function(weight, index, weights) {
    if (inputVector[index] > 0) {
      weights[index] = weight + learningRate;
    }
  });
}
When used, this training function will simply add our learning rate to the weight of the active neuron in the weight vector; before and after a round of training (or repetition), the results will be as follows:

// Original weight vector: [0, 0, 0, 0]
// Neural Net output: 0.00 Error: 1
learn(input, weights);
// New weight vector: [0, 0, 0.20, 0]
// Neural Net output: 0.20 Error: 0.8
// If it's not obvious: the neural net output is now closer to 1 (the chicken output), which is what we wanted, so we can conclude that we are moving in the right direction
Live example: Neural Net 003.

Okay, now that we're moving in the right direction, the last piece of this puzzle will be implementing repetition.

It's not that hard: in nature we just do the same thing over and over again, while in code we simply specify the number of repetitions:

var trials = 6;
And implementing the repetition count in our training of the neural network looks like this:

function train(trials) {
  for (var i = 0; i < trials; i++) {
    neuralNetResult = evaluateNeuralNetwork(input, weights);
    learn(input, weights);
  }
}
Well, here's our final report:

Neural Net output: 0.00 Error: 1.00 Weight Vector:
Neural Net output: 0.20 Error: 0.80 Weight Vector:
Neural Net output: 0.40 Error: 0.60 Weight Vector:
Neural Net output: 0.60 Error: 0.40 Weight Vector:
Neural Net output: 0.80 Error: 0.20 Weight Vector:
Neural Net output: 1.00 Error: 0.00 Weight Vector:
// Chicken Dinner!
Live example: Neural Net 004.

Now we have a weight vector that will produce the desired output (chicken for dinner) only if the input vector corresponds to reality (pressing the third button).

So what's the coolest thing we just did?

In this particular case, our neural network (after training) can recognize the input data and say what will lead to the desired result (we will still need to program specific situations):

In addition, it is a scalable model, a toy and a tool for our learning. We were able to learn something new about machine learning, neural networks and artificial intelligence.

Warning to users:

  • There is no mechanism for storing the learned weights, so this neural network will forget everything it knows. When you update or re-run the code, you need at least six successful iterations for the network to fully learn again; if you assume that a person or machine will press buttons at random, this will take some time.
  • Biological networks for learning important things have a learning rate of 1, so only one successful iteration would be needed.
  • There is a learning algorithm that closely resembles biological neurons, and it has a catchy name: the Widrow-Hoff rule, or Widrow-Hoff learning.
  • Neuron thresholds (1 in our example) and the effects of overtraining (with a large number of repetitions the result will exceed 1) are not taken into account here, but they are very important in nature and are responsible for large and complex blocks of behavioral responses. So are negative weights.

Notes and list of references for further reading

I tried to avoid maths and strict terminology, but if you're interested: we built a perceptron, which is defined as a supervised learning algorithm for binary classifiers - heavy stuff.

The biological structure of the brain is not a simple topic, partly because of imprecision and partly because of its complexity. It's better to start with Neuroscience (Purves) and Cognitive Neuroscience (Gazzaniga). I've modified and adapted the rabbit example from Gateway to Memory (Gluck), which is also a great introduction to the world of graphs.

An Introduction to Neural Networks (Gurney) is another great resource for all your AI needs.

And now in Python! Thanks to Ilya Andshmidt for providing the Python version:

inputs = [0, 0, 1, 0]    # the third button is the pressed one (values as in the Go version below)
weights = [0, 0, 0, 0]
desired_result = 1
learning_rate = 0.2
trials = 6

def evaluate_neural_network(input_array, weight_array):
    result = 0
    for i in range(len(input_array)):
        layer_value = input_array[i] * weight_array[i]
        result += layer_value
    print("evaluate_neural_network: " + str(result))
    print("weights: " + str(weights))
    return result

def evaluate_error(desired, actual):
    error = desired - actual
    print("evaluate_error: " + str(error))
    return error

def learn(input_array, weight_array):
    print("learning...")
    for i in range(len(input_array)):
        if input_array[i] > 0:
            weight_array[i] += learning_rate

def train(trials):
    for i in range(trials):
        neural_net_result = evaluate_neural_network(inputs, weights)
        learn(inputs, weights)

train(trials)
And now in Go! Thanks to Kieran Maher for this version.

package main

import (
    "fmt"
    "math"
)

func main() {
    fmt.Println("Creating inputs and weights ...")
    inputs := []float64{0.00, 0.00, 1.00, 0.00}
    weights := []float64{0.00, 0.00, 0.00, 0.00}
    desired := 1.00
    learningRate := 0.20
    trials := 6
    train(trials, inputs, weights, desired, learningRate)
}

func train(trials int, inputs []float64, weights []float64, desired float64, learningRate float64) {
    for i := 1; i < trials; i++ {
        weights = learn(inputs, weights, learningRate)
        output := evaluate(inputs, weights)
        errorResult := evaluateError(desired, output)
        fmt.Print("Output: ")
        fmt.Print(math.Round(output*100) / 100)
        fmt.Print("\nError: ")
        fmt.Print(math.Round(errorResult*100) / 100)
        fmt.Print("\n\n")
    }
}

func learn(inputVector []float64, weightVector []float64, learningRate float64) []float64 {
    for index, inputValue := range inputVector {
        if inputValue > 0.00 {
            weightVector[index] = weightVector[index] + learningRate
        }
    }
    return weightVector
}

func evaluate(inputVector []float64, weightVector []float64) float64 {
    result := 0.00
    for index, inputValue := range inputVector {
        layerValue := inputValue * weightVector[index]
        result = result + layerValue
    }
    return result
}

func evaluateError(desired float64, actual float64) float64 {
    return desired - actual
}
