
How Does Artificial Intelligence Work?

Updated: Feb 6

Artificial intelligence has played a key role in sci-fi films and literature for decades. It often inspires humanity's biggest hopes for the future, and our worst fears as well. But do you know how artificial intelligence really works? This month we are going to tie together everything that we have learned so far so that we can understand how artificial intelligence works according to the theory of neurological intelligence.


To understand Artificial Intelligence we will use concepts from these posts: "The Basics of Neurological Intelligence", where we studied how all neurons have an identity, an epistemological method, and a relationship, and learned about the four different types of neurons; "Meta-Ethics", where we learned about the ethical principles that neurons use to behave; and "How can I know what is true?", where we learned about the difference between the epistemological and anti-epistemological methods of learning.


Neurological intelligence tells us that to understand artificial intelligences we should start by looking at the neurons that create them. The neurons that create artificial intelligence are called artificial neurons, and they are made of electronic circuits. We will break down each neuron into its identity, relationship, epistemological method, and the ethics it follows, so that we can understand how it functions.


Logic Gates

The simplest artificial neuron is called a logic gate.

Relationship: Logic gates work by taking in one or more electrical signals, and outputting an appropriate signal.

Identity: The identity of a logic gate is fixed, and determined by the way it processes signals. So each AND gate, for example, is exactly the same as every other AND gate.

Epistemological Method: The signal that it outputs works according to a truth table. For example, the AND gate works according to the following truth table:

S1      S2      Y
True    True    True
False   True    False
True    False   False
False   False   False


Where S1 is the first input signal, S2 is the second input signal, Y is the output of the AND gate, True means there is a signal, and False means there is no signal. The truth table states that the AND gate is only active when both signals S1 and S2 are active. This follows the epistemological method of proposition, with the statement "S1 AND S2" as its proposition.


Implementation

One way that this can be implemented in an electrical circuit is to give weights to each of the signals, and then to have a threshold where the logic gate will output a signal.


Social Contracts Ethic: The weights of the signals are how the gate follows the first relationship ethic. The weight associated with a signal is like the social contract between the logic gates that sent the incoming signals, and the logic gate receiving them.


Common Good Ethic: Neurons contribute to the common good of the network using the threshold. The input signals are added together by the logic gate, and after reaching an input threshold the logic gate realizes it needs to contribute to the computer, and it outputs its own signal.


EXAMPLES: The AND and OR gates

To create the AND gate you could give each signal a weight of 0.5, and set a threshold of 1. This way, if there are no signals the value at the gate is zero. If there is one signal, then the value at the gate is 0.5, and if both signals are present then the value at the gate is 1, which meets the threshold, and the gate will activate.





Using this same technique you can create several other logic gates. For instance, the OR gate can be created by having a threshold of 1 and giving each signal a weight of 1. This way, if no signal is present the value at the gate is 0 and there is no output, but if there is a signal the value at the gate is 1 for one signal or 2 for two signals, both of which reach the threshold, and the gate will output a signal.
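
If you like to see this in code, here is a minimal sketch of a weighted threshold gate in Python, with the AND and OR weights described above (the function and variable names are just for illustration):

```python
def gate(signals, weights, threshold):
    # a logic gate fires when the weighted sum of its
    # incoming signals reaches the threshold
    value = sum(s * w for s, w in zip(signals, weights))
    return value >= threshold

def AND(s1, s2):
    return gate([s1, s2], weights=[0.5, 0.5], threshold=1)

def OR(s1, s2):
    return gate([s1, s2], weights=[1, 1], threshold=1)

# reproduces the truth table from earlier
for s1 in (False, True):
    for s2 in (False, True):
        print(s1, s2, AND(s1, s2), OR(s1, s2))
```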


Biases

Not all gates can be created with just weights and a threshold, however. Logic gates can also use a bias. A bias changes the value at the logic gate even when there is no incoming signal present.

Values Ethic: The bias acts as the gate's values because it is a weight that is not associated with a relationship, but instead is a part of the gate itself.

Virtue Signaling Ethic: The logic gates that use biases send signals when their bias is enough to reach the threshold, even if they are not triggered by incoming signals.


Example: The NOT Gate

For the last example, the NOT gate can be created by assigning a threshold of 1, a bias of 1, and giving the incoming signal a weight of -1. When there is no input signal, the bias gives the gate a value of 1, which meets the threshold, and a signal is produced. However, if there is an input signal, the value at the gate drops back down to 0, and the output stops. So the output signal is always the opposite of the input signal.
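
Extending the earlier sketch with a bias term gives us the NOT gate (again, the names are illustrative only):

```python
def gate(signals, weights, threshold, bias=0):
    # the bias is added to the weighted sum even when no signal arrives
    value = bias + sum(s * w for s, w in zip(signals, weights))
    return value >= threshold

def NOT(s):
    # bias 1, weight -1, threshold 1: the gate fires only
    # when its input is silent
    return gate([s], weights=[-1], threshold=1, bias=1)

print(NOT(False), NOT(True))  # True False
```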


Why these gates are useful:

When you put many neurons together you create a neural network. Using the right combinations of AND, OR, and NOT gates you can create neural networks capable of computing any logical function. Neural networks made from logic gates are called digital electronic devices. A great example of one is your computer.
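
As a small illustration of combining gates, here is an XOR gate (true when exactly one input is active) built entirely out of the AND, OR, and NOT sketches above:

```python
def gate(signals, weights, threshold, bias=0):
    value = bias + sum(s * w for s, w in zip(signals, weights))
    return value >= threshold

def AND(a, b): return gate([a, b], [0.5, 0.5], threshold=1)
def OR(a, b):  return gate([a, b], [1, 1], threshold=1)
def NOT(a):    return gate([a], [-1], threshold=1, bias=1)

def XOR(a, b):
    # true when at least one input is active, but not both
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (False, True):
    for b in (False, True):
        print(a, b, XOR(a, b))  # False, True, True, False
```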


Shortcomings:

Putting many logic gates together is possible mostly because they are not unique. Their uniformity means that each one can be represented as a mathematical operation, and you can use that representation to program them to do what you want them to do. However, they will never do something that you didn't tell them to do.


This limits you to programming them to perform tasks that you already know how to do and explain. Ideally, however, an artificial intelligence would be able to solve problems that you cannot solve yourself, and that you didn't have to explain in perfect detail.


Unsupervised Action:

The gates can either be connected to other gates, or to external inputs. When the gates are configured to receive external inputs they are using the anti-epistemological method of propositional reasoning, because they are accepting propositions instead of extending them. This is useful because it allows the machine to interact with the outside world.


Machine Learning

The second level artificial neuron is similar to the logic gate, but it can also learn. These neurons have the same weights (known as synaptic weights), biases, and signal threshold as logic gates, however, importantly, they also undergo machine learning to change their weights and their biases.


Identity: The changing weights and biases in each neuron make them all different, and give them their individual identities.


Relationship: The changing relationship between neurons resembles personal relationships.


Epistemological Method: Machine learning is a form of abductive reasoning. The neurons change their synaptic weights and their biases so that the output of the intelligence more closely matches reality.


For example, a neural network could be trained to find which half of a paper has the most dots drawn on it. To train the network you would have lots of different papers with different dot patterns on them. The network would make a guess as to which side of the first paper has the most dots, and you would tell it whether it was right or wrong. You would then go to the next paper, and the next, repeating the process until you run out of papers; then you shuffle your papers and go again.



Negotiation Ethic: Every time you tell the machine whether it was right or wrong it adjusts its weights and biases to get closer to the correct answer. The process of adjusting the weights is not random; instead it follows an algorithm that brings it to the correct answer as fast as possible. Following this algorithm is how the neurons follow the third relationship ethic. They want all of the neurons to be able to contribute effectively to the output, so they assist each other by changing the weights that connect them.

Autonomy Ethic: The neurons also want their own contribution to be optimal, so they will adjust their biases so that they can contribute optimally.


Through machine learning the network will eventually be able to look at any paper in your pile and confidently say which side has the most dots, and hopefully it will be able to do this for new papers it hasn't seen yet.
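
Here is a minimal sketch of that training loop in Python, assuming the classic perceptron learning rule and made-up dot counts (all names and numbers here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=2)  # synaptic weights: one per half of the paper
b = 0.0                 # bias
lr = 0.1                # how far to adjust after each right/wrong answer

def guess(x):
    # fire (guess "left has more dots") when the weighted
    # input plus the bias reaches the threshold of 0
    return 1 if x @ w + b >= 0 else 0

# each "paper" is a pair of dot counts: (left half, right half)
papers = rng.integers(0, 50, size=(200, 2)).astype(float)
answers = (papers[:, 0] > papers[:, 1]).astype(int)

for epoch in range(20):  # shuffle the papers and go again
    for i in rng.permutation(len(papers)):
        error = answers[i] - guess(papers[i])  # right (0) or wrong (+/-1)
        w += lr * error * papers[i]  # negotiation: adjust the weights
        b += lr * error              # autonomy: adjust the bias
```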



Shortcomings:

If the above example seems very underwhelming, it is... These types of neural networks could only solve problems about as simple as the one shown and had basically no practical use.


The individual nature of artificial neurons was their downfall. Their uniqueness made it all but impossible to put many layers of neurons together. This limited the types of problems they could solve to only the very simplest, such as the example used above.


Unsupervised Learning:

One other use of these networks was to let them learn without telling them what is right or wrong. This is accomplished through the use of the anti-epistemological methods of learning.


Typically unsupervised networks will have a section that creates a lie using the anti-epistemological methods, and a section that finds the truths in that lie using the epistemological methods.


In the competitive learning network there is an input layer, which creates a lie, and a competitive layer which figures out which lie is the best one.


They do this by fighting with each other to prove that their lie is the best. The fight is fought and won by sending inhibitory signals to each other and stopping the other neurons from sending their signals. The last neuron still sending a signal wins.


Anti-epistemological method: The competitive neuron that wins will then strengthen its connections with the input neurons, which increases its chances of winning again, and the ones that lost weaken their connections with the input neurons that led to that defeat. This follows the anti-epistemological method of abduction, because the neurons are finding out which parts of reality can be used to reinforce their opinions.


After training is over each output neuron will be attuned to dominate the output when a specific pattern is presented. It is then up to the designer of the AI to figure out if the patterns the neurons respond to are useful.
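
A minimal sketch of competitive learning, using a common simplification in which only the winning neuron updates its weights (the sizes and learning rate here are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
patterns = rng.random((500, 4))  # hypothetical input patterns
W = rng.random((3, 4))           # one weight row per competitive neuron
lr = 0.05

for x in patterns:
    responses = W @ x              # each competitive neuron's signal
    winner = np.argmax(responses)  # inhibition silences all but one
    # the winner strengthens its connections to the inputs that
    # made it win, moving its weights toward that pattern
    W[winner] += lr * (x - W[winner])

# after training, each row of W is attuned to one cluster of patterns
```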


Error Backpropagation

The next artificial neuron does not have a new name, but it has a new learning style called backpropagation of error. Error backpropagation was invented to overcome the largest drawback of second level neural networks, which is that the individual nature of each neuron prevents them from working together in layers.


Relationship: To combat this, the neurons were given a new relationship: that of a helper. They can help out neurons in the next layer down by adjusting their own output. Second level artificial neurons have a binary output: either no output, or full output. The neurons used in backpropagation of error, however, have an output that can be anywhere between those extremes.


Identity: So instead of an all-or-nothing threshold, the input is mapped continuously to an output. Every change in the weights or the inputs to the neuron will change the output of the neuron a little, and all the contributions to the neuron from the neurons in the layer before it are now reflected in the neuron's output. This gives the neuron the identity of a representative.

The diagram above shows a neuron with an output of 0.4. It came to this output because the threshold function, shown in red, converted the input of -0.1 into an output of 0.4.
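
One common choice for such a continuous threshold function is the logistic sigmoid, sketched below (the post's diagram may use a different curve, so treat this particular function as an assumption):

```python
import numpy as np

def sigmoid(x):
    # squashes any input smoothly onto an output between 0 and 1,
    # replacing the all-or-nothing threshold
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(-0.1))  # ~0.48: a small negative input still
                      # produces a partial output
```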


Epistemological Method: Now, when the neural network undergoes machine learning there are two techniques that can be used to learn. There is the second level learning technique we discussed before, where the weights and biases are adjusted. But there is also an additional technique where the neuron can request that the neurons in the previous layer adjust their output.


Although each neuron does not know what is happening at the output of the network, it can infer the best contribution it can make to the output, because it listens to the requests made of it.


Responsibility Ethic: The neurons listen to these requests because they now fulfill their responsibilities to the neurons in the layers that depend on them. They do this by adjusting the weights between themselves and the previous layer.

Branding Ethic: The hidden neurons want to represent something important so they will change their bias upon request.
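
Here is a minimal sketch of both learning techniques working together, assuming a tiny two-layer network of sigmoid neurons trained on the XOR problem (the architecture and learning rate are illustrative choices):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a problem that needs a hidden layer of neurons
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)  # hidden layer
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)  # output layer
lr = 1.0

for step in range(5000):
    # forward pass: every neuron outputs something between 0 and 1
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # backward pass: the output layer computes its error, then
    # "requests" adjustments from the hidden layer before it
    d_out = (y - T) * y * (1 - y)
    d_hid = (d_out @ W2.T) * h * (1 - h)

    # responsibility: adjust the weights feeding each layer
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_hid
    # branding: adjust the biases on request
    b2 -= lr * d_out.sum(axis=0)
    b1 -= lr * d_hid.sum(axis=0)

print(y.round(2))  # should approach [[0], [1], [1], [0]]
```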


Error backpropagation is what makes modern AI practical for solving problems. In fact, an enormous range of problems can be solved using backpropagation. One interesting problem that it can solve is computer vision: the ability to interpret images. For instance, it could look at an image of a street and find all of the lamp posts in the picture.


Unsupervised Learning:

Backpropagation of error can also be used for unsupervised learning. This is done by splitting the network in two: a network that lies, which is sometimes called an adversarial network, and a network that uncovers the truth. The simplest example of this is called the autoencoder, represented below.



The autoencoder has two parts. The first part, called the encoder, goes from the input to the center, and with every layer the number of neurons per layer decreases. Its goal is to convey the information at the input in the most compact way possible. The second part, called the decoder, takes the compressed information and tries to decode it back into the original input.


The network pictured tries to represent the input of five neurons with only two neurons. The encoder uses two hidden layers (layers of neurons other than the input and output layers) of four neurons each to compress the information. The decoder uses its own two hidden layers to bring the information found in those two neurons back into the format of five neurons.


The decoder is trained the same way we already discussed: it compares its reconstruction to the actual input to see if they are the same, and it adjusts its weights and biases in order to recreate the input. If the compressed information is not accurate the decoder will not be able to recreate the input, and it will tell the encoder that it compressed the input incorrectly.
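
A minimal sketch of the 5-4-4-2-4-4-5 autoencoder just described, written with PyTorch for brevity (the training data here is random and purely illustrative):

```python
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    # encoder: 5 inputs squeezed down to 2 neurons
    nn.Linear(5, 4), nn.Sigmoid(),
    nn.Linear(4, 4), nn.Sigmoid(),
    nn.Linear(4, 2), nn.Sigmoid(),
    # decoder: 2 neurons expanded back to 5 outputs
    nn.Linear(2, 4), nn.Sigmoid(),
    nn.Linear(4, 4), nn.Sigmoid(),
    nn.Linear(4, 5), nn.Sigmoid(),
)

data = torch.rand(100, 5)  # hypothetical training inputs
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(1000):
    reconstruction = autoencoder(data)
    loss = loss_fn(reconstruction, data)  # compare output to input
    optimizer.zero_grad()
    loss.backward()   # errors flow back through decoder, then encoder
    optimizer.step()
```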


Anti-epistemological method: When the decoder tells the encoder that the compressed data is not correct, the encoder uses the anti-epistemological method of induction as each neuron tells the next how to change its output to make the compressed data better. Eventually they figure out which actions of the input neurons are useful, and which can be ignored.


Shortcomings:

The biggest downfall of error backpropagation is that you have no idea what the neural network is thinking; you only know the solutions it comes up with. This is called the black-box model of neural networks. This problem arises because you do not interact directly with, and thus are not able to understand, the neurons in the middle layers. For this reason the middle layers of neurons are called hidden layers.


Explainable AI

The fourth level of artificial neuron introduces a new type of learning called explainable AI. The biggest setback of neural networks that have many layers is that you cannot understand what the neurons are doing. This posed a problem for people who wanted to use AI for tasks with a lot of risk. If they couldn't tell what the AI was thinking, then they couldn't trust it enough for the task.


Identity: Layer-wise relevance propagation was the first method used to peer into the inner workings of the AI. In this method each neuron is given a new number called its relevance number. The relevance number determines how relevant the output of that neuron was to the final result of the network. The neurons have a moral identity now because they realize their relevance to the final result.


Epistemological Method: The neurons in each layer get their relevance scores, and then use those scores to deduce the relevance of the neurons that influenced them. Eventually all the neurons have their relevance score, including the input neurons. This means we can look at the relevance scores of the input neurons and see exactly which ones helped the AI to make a decision. Or in other words we can know what it was thinking.
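
A minimal sketch of how one layer's relevance scores can be redistributed to the layer before it, loosely following the epsilon-rule of layer-wise relevance propagation (the numbers here are made up):

```python
import numpy as np

def lrp_layer(a_prev, W, relevance, eps=1e-6):
    # each input neuron's share of a neuron's total input determines
    # how much of that neuron's relevance flows back to it
    z = a_prev @ W                      # total input to each neuron
    contrib = a_prev[:, None] * W       # each input's contribution
    share = contrib / (z + eps * np.sign(z))
    return share @ relevance            # relevance of the previous layer

# hypothetical layer: 3 input neurons feeding 2 neurons
a_prev = np.array([0.2, 0.9, 0.4])                    # input activations
W = np.array([[0.5, -0.1], [0.3, 0.8], [-0.2, 0.4]])  # connection weights
R = np.array([0.7, 0.3])                # the 2 neurons' relevance scores
print(lrp_layer(a_prev, W, R))  # sums to ~1.0: relevance is conserved
```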


Relationship: The neurons gain a new relationship of a safety leader or a judge because they help each other see how they helped to create the final output of the intelligence.


Accountability Ethic: The neurons obey the fifth relationship ethic as they assess how the connections they have with other neurons influenced their behavior.

Self Assessment Ethic: The neurons obey the fifth identity ethic as they assess how their own bias has influenced their behavior.


EXAMPLE: Street Lights

An AI might be shown a picture of a street and asked how many street lights there are in the picture.

In this example the AI was able to find seven street lights, and when asked how it came to that answer it correctly shows that the areas of the picture with street lights helped it come to an answer.


If the AI said there were a different number of street lights then we could still ask it why it came to that conclusion, and maybe we would find that it could not recognize the street lights that were all white, or that it thought the cars were street lights. In either case we would have to retrain our AI so that it could correctly count the street lights.


Anti-epistemological method: The technique could also be used on an adversarial network (a network that is trained to trick a different network) to figure out why it came up with the trick that it did. In this case it would be using the anti-epistemological method of deduction, because it would find the weaknesses that it was trying to exploit in its attack.


Sentience

There are currently no artificial neural networks in which the neurons obey the sixth set of ethics. A neural network that does this will presumably have all the tools to be sentient. When this happens we will finally be able to realize the science fiction fantasies that have inspired us and filled our minds with wonder... hopefully just the good fantasies though.

