Two or more randomly weighted neural networks receive the same input.
If they disagree on the output, run backpropagation on the odd one(s) out until the output number is the same as that of the other network(s).
The goal is to have many different models agree with each other. The output numbers being the same means the models agree on what they are seeing. The outputs “mirror” each other.
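A minimal sketch of this training step, assuming two tiny single-layer linear networks (the shapes, learning rate, and agreement threshold are all illustrative, not a prescribed implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two randomly weighted "networks": single linear layers, purely as a
# minimal stand-in for real models.
W1 = rng.normal(size=(4, 1))
W2 = rng.normal(size=(4, 1))

def forward(W, x):
    return float(x @ W)

x = rng.normal(size=(1, 4))    # the shared input both networks receive
lr = 0.5 / float(x @ x.T)      # step size scaled so the loop provably contracts

# Backpropagate only through the "odd one out" (network 1 here), treating
# the other network's output as a fixed target, until the two agree.
for _ in range(100):
    y1, y2 = forward(W1, x), forward(W2, x)
    if abs(y1 - y2) < 1e-9:
        break
    # Gradient of the disagreement 0.5 * (y1 - y2)**2 with respect to W1.
    W1 -= lr * (y1 - y2) * x.T

assert abs(forward(W1, x) - forward(W2, x)) < 1e-6
```

With more than two networks, the same update would be applied to whichever model's output is furthest from the consensus.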
After many rounds of backprop, this generates thousands of “numbers” which do not behave as numbers, but rather as symbols that represent models/experiences.
The numbers could then be used as causal logic, like 1.557 + 1.665 -> 4.234. This would create a kind of causal spaghetti code. A rule like this would represent something like dog + hungry expression -> dog will seek food. This is because the network experienced seeing a hungry dog and then saw it eating food, so the logic center would make the causal connection.
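A sketch of what such a rule table could look like, reusing the symbols from the example above (both the table and the `infer` helper are hypothetical):

```python
# Each float is an opaque symbol minted by the mirror networks, not a
# quantity: "1.557 + 1.665 -> 4.234" is a learned pairing, not arithmetic.
causal_rules = {
    (1.557, 1.665): 4.234,  # dog + hungry expression -> dog will seek food
}

def infer(a, b):
    # Return the learned consequent, or None if no experience links a and b.
    return causal_rules.get((a, b))

assert infer(1.557, 1.665) == 4.234
assert infer(1.557, 9.999) is None
```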
It could also see that dogs don’t always have a hungry expression before they eat food. This would represent a weakened connection in some hypergraph, or a “!->” relation, or even a statistical relation.
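One way to sketch the statistical reading, with a made-up `observe`/`strength` interface that tracks how often an antecedent is actually followed by its consequent:

```python
from collections import defaultdict

# Hypothetical rule statistics: each antecedent maps to
# [times followed by the consequent, total observations].
seen = defaultdict(lambda: [0, 0])

def observe(antecedent, followed_by_consequent):
    stats = seen[antecedent]
    stats[1] += 1
    if followed_by_consequent:
        stats[0] += 1

def strength(antecedent):
    # Connection strength in [0, 1]; weak connections fade toward 0.
    followed, total = seen[antecedent]
    return followed / total if total else 0.0

# Dogs usually, but not always, look hungry before eating.
for outcome in (True, True, True, False):
    observe(("dog", "eats"), outcome)

assert strength(("dog", "eats")) == 0.75
```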
Eventually, after the network is fully trained and has few disagreements about what it is experiencing, the numbers could be paired with words, representing the learning of language. For example, you could input a few pictures of animals running, and the resulting numbers could be paired with the word/tag “running”.
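A sketch of that pairing step, assuming each word gets a prototype number averaged from the outputs that co-occurred with it (the words and values are invented):

```python
# Hypothetical word pairing: average the outputs seen alongside a tag into
# a prototype, then name a new output by its nearest prototype.
labels = {}

def tag(word, outputs):
    labels[word] = sum(outputs) / len(outputs)

def name(output):
    return min(labels, key=lambda w: abs(labels[w] - output))

tag("running", [2.31, 2.35, 2.29])   # outputs from pictures of running
tag("sleeping", [5.10, 5.04])

assert name(2.33) == "running"
```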
As the mirror networks get more and more trained, similar “objects/groups/archetypes” from the outside world will group together in their numbers, creating Venn-diagram-like structures, each represented by symbol pairs: for example, nouns, cats, frisbees. Every group has the same causal behaviour.
If trained networks disagree on what they are seeing, the causal logic center can step in and choose the correct model.
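A sketch of that arbitration, assuming the logic center simply prefers whichever candidate output it has a causal rule for (the rule set here is invented):

```python
# Hypothetical arbitration: outcomes the causal logic centre has rules for.
known_consequents = {4.234}

def arbitrate(candidate_outputs):
    # Prefer the candidate the causal logic recognises; fall back to the
    # first candidate if none of them fit a known rule.
    for y in candidate_outputs:
        if y in known_consequents:
            return y
    return candidate_outputs[0]

assert arbitrate([9.999, 4.234]) == 4.234
```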
The best part of this is that there is no ego and no doer: pure awareness (the experiencer) and intelligence (the thinker). No possibility of a robot uprising.
At the start, the network might see two completely different things and experience two similar numbers, but the glorious thing is that those numbers don’t represent anything yet. As the neural networks start to differentiate, separate “objects” start to emerge. Similar output numbers would mean similar things. For example, numbers 1.0022 to 1.0095 could be different types of dogs. A spectrum of dogs, if you will. The whole network creates a number spectrum of all the experiences it has been subject to.
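The spectrum idea can be sketched as an interval lookup over the output line (the endpoints below are illustrative, borrowed from the dog example):

```python
# Hypothetical spectrum: contiguous bands of the output line map to
# emergent categories.
spectrum = [
    (1.0022, 1.0095, "dog"),
    (1.0100, 1.0180, "cat"),
]

def category(output):
    for lo, hi, name in spectrum:
        if lo <= output <= hi:
            return name
    return None  # this region of the spectrum is not yet differentiated

assert category(1.0050) == "dog"
assert category(1.0400) is None
```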
Once you start labelling the output numbers, you can pair them with the pictures that triggered the experiencer neurons. This completes the input-output loop. It allows for memory and lets the system re-experience something by feeding the image back in for analysis by the mirror networks.
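A sketch of that loop as a symbol-to-picture store (the store and the filename are hypothetical):

```python
# Hypothetical memory store: each minted symbol keeps the picture that
# first triggered it, so feeding the picture back re-creates the experience.
memory = {}

def remember(symbol, picture):
    memory.setdefault(symbol, picture)

def re_experience(symbol):
    # Retrieve the stored picture, to be run back through the mirror networks.
    return memory.get(symbol)

remember(1.0050, "dog_photo_001.jpg")
assert re_experience(1.0050) == "dog_photo_001.jpg"
```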
This explains why babies do not have memories: which neurons represent which experience is constantly in flux. A picture that triggered one group of neurons early on would trigger another group of neurons later on.
Babies also do not learn basic concepts at the same time they learn words; they spend a very long time just trying to figure out what they are experiencing.