The diagram is just a map of which “nodes” connect to which other “nodes”. That’s why there are lines between the nodes: each one feeds its output to the nodes in the next layer. That’s also why it’s called a “neural net”. Each node computes some function. In the human brain, a neuron only “fires”, meaning it passes a signal along to the next neurons, if its input reaches some threshold. In a digital neural net, a “node” usually does something similar: it multiplies each incoming signal by some factor (a “weight”), adds them all up, and passes the result through a simple “activation” function before sending it on. How this actually ends up producing the correct output is much more complicated, and involves training the net to recognize a certain thing and react a certain way when it sees it.
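To make that concrete, here’s a minimal sketch of a single node in Python. The function name, the bias term, and the choice of ReLU as the activation are just illustrative assumptions, not something from the diagram:

```python
# A minimal sketch of one "node": weighted sum of inputs,
# then a simple activation (ReLU here, chosen just for illustration).

def node(inputs, weights, bias):
    # Multiply each input signal by its weight and add them up.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # "Fire" only if the combined signal is positive (ReLU activation).
    return max(0.0, total)

# Example: a node with three inputs.
print(node([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1))  # -> 0.0 (didn't fire)
print(node([0.5, 1.0, 2.0], [0.8, 0.2, 0.5], bias=0.1))    # -> 1.7
```

A whole net is just many of these wired together, with each layer’s outputs becoming the next layer’s inputs.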
I’m not gonna explain the actually good ways of training (because I don’t understand them), but the one I do understand is this: you make a bunch of nets with random weights and test them. Then you take the ones that do best, make a bunch of copies, tweak each copy’s weights slightly, and test again, repeating until the nets get good enough. (This is basically a simple evolutionary algorithm.)
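Here’s a rough sketch of that copy-tweak-test loop in Python. It treats each “net” as just a list of weights, and the scoring function is a made-up stand-in for whatever test you’d actually run the nets through:

```python
import random

# A toy version of the copy-and-tweak training loop described above.
# Each "net" is just a list of weights; score() stands in for
# whatever test you run the nets through (higher is better).

def score(net):
    # Hypothetical fitness test: how close the weights get to a target.
    target = [0.5, -0.3, 0.8]
    return -sum((w - t) ** 2 for w, t in zip(net, target))

def train(generations=100, population=50, keep=5, tweak=0.1):
    # Start with a bunch of nets with random weights.
    nets = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(population)]
    for _ in range(generations):
        # Test them all and keep the best performers.
        nets.sort(key=score, reverse=True)
        best = nets[:keep]
        # Make copies of the winners with slightly tweaked weights.
        nets = [
            [w + random.gauss(0, tweak) for w in random.choice(best)]
            for _ in range(population)
        ]
    return max(nets, key=score)

print(train())  # should land near [0.5, -0.3, 0.8]
```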
The main problem with training nets is that you can never quite tell exactly how they work, because they’re just too complicated. For example, I remember seeing a news story where researchers were training a neural net to identify some skin disease, but all the example pictures of the disease had rulers next to them, so in the end the neural net ‘learned’ to identify the rulers, not the disease.