Correctness

■ Comparison of AIXI and UAGIS

AIXI is a research project similar to UAGIS.
It is likewise a top-down approach that tries to define artificial general intelligence mathematically.
Like UAGIS, it aims to maximize rewards.
The big difference lies in what each considers good or bad inductive inference.
AIXI assumes that the shorter a program is, the more plausible it is.
This is based on the philosophical principle known as "Occam's Razor".
"Occam's Razor" is a guideline stating that "to explain something, one should not assume more than is necessary".
In other words, AIXI takes a "short program" to be more likely to be a good program than a "long program".
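Formally, this idea is usually expressed as a prior that weights each program by 2 to the power of minus its length, so every extra bit halves the weight. Below is a minimal sketch of that length-based weighting; treating a program as a plain bit string is my simplification, and real AIXI mixes over all programs rather than scoring them one at a time.

# Minimal sketch of a length-based (Solomonoff-style) prior.
# Simplification: a "program" here is just a bit string; AIXI's
# actual mixture over environment programs is far richer than this.
def length_prior(program: str) -> float:
    """Each additional bit of length halves the plausibility."""
    return 2.0 ** (-len(program))

print(length_prior("101"))     # 0.125: the 3-bit program gets
print(length_prior("101101"))  # 0.015625: 8x the weight of the 6-bit one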
However, a "program with a high probability of being good" is not necessarily the "best program".
However, if you can only know whether a program is good or bad by probabilities, the program with the highest probability is the best choice.
AIXI is correct under the assumption that the quality of a program can be judged only by the length of the program.
However, shorter programs are not necessarily better.
Consider the shortest possible program: one that ignores all input and simply outputs at random.
Such a program amounts to the claim that everything in this world is a random result, and that everything that seems regular is mere coincidence.
Some people may find this absurd, but it is a matter of interpretation, and nothing about it is logically wrong.
Even so, a program that just prints random output feels like a "bad" program, and it would feel "bad" regardless of its length.
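To make the point concrete, a predictor of this degenerate kind fits in a couple of lines (Python is my choice here, purely for illustration):

import random

# The degenerate hypothesis: ignore all evidence and answer at random.
# It is about as short as a predictor can get, yet it predicts nothing.
def random_predictor(history):
    return random.choice([0, 1])

print(random_predictor([1, 0, 1, 1]))  # the history is simply ignored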
In other words, there are criteria other than program length that determine whether an inductive inference is good or bad.
UAGIS judges good or bad by the quality (bias) and quantity of the evidence used for inductive inference.
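The text above does not spell out UAGIS's actual scoring rule, so the following is only a hypothetical sketch of a "quality times quantity" score; the function, names, and weighting are my assumptions, not anything defined by UAGIS.

# Hypothetical sketch only: a "quality x quantity" score for evidence.
# The names and the weighting scheme are illustrative assumptions,
# not UAGIS's actual definition.
def evidence_score(evidence: list[tuple[float, int]]) -> float:
    """Each item is (quality, count): quality in [0, 1] discounts
    biased evidence, count rewards the amount of evidence."""
    return sum(quality * count for quality, count in evidence)

print(evidence_score([(0.9, 10)]))  # 9.0: plenty of unbiased evidence
print(evidence_score([(0.3, 5)]))   # 1.5: a little biased evidence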
It is not that one of AIXI and UAGIS is right and the other is wrong.
I simply adopted a different definition of the correctness of inductive inference.

■ Definition of "correctness" in inductive inference

In order to create a program that gives the "correct answer", the "correct answer" must first be defined.
No assumptions are needed to show the correctness of deductive inference.
The correctness of inductive inference, by contrast, has to be defined.
AIXI defines it by program length; UAGIS defines it by the quality and quantity of evidence.
Another example of a definition of correctness is the "Turing test".
The "Turing test" is easy to pass if the machine is allowed to pose as a baby.
The "Turing test" rests on the plausible idea that things which are indistinguishable are the same.
However, it is a clear fallacy to conclude that two things are the same in content merely because they are outwardly indistinguishable.
In modern science, the prevailing view is that only results have value.
Whether an artificial intelligence performs well or badly is judged by its test results.
In other words, without looking at test results, we cannot judge good or bad.
But because we look only at the results, we cannot tell whether a good result came about by chance.
And even if it was not a coincidence, we cannot know whether it holds only for that particular test.
It is a leap of logic to judge something "general purpose" or "universal" based only on its behavior in the specific situations a test covers.
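As a worked illustration of the "good by chance" problem, consider how often a pure guesser passes a small test; the test size and passing threshold below are made-up numbers, not figures from the text.

from math import comb

# Probability that a random guesser answers at least k of n binary
# questions correctly: the binomial tail at p = 0.5.
def p_at_least(n: int, k: int) -> float:
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Illustrative numbers: roughly 1 in 93 pure guessers scores 90%
# or better on a 10-question test.
print(p_at_least(10, 9))  # ~0.0107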

■ The correctness of the policy of imitating the brain

In human vision, the same color appears brighter when its surroundings are dark; this is an optical illusion.
Various other errors, known as "cognitive biases", also occur.
If you want to reproduce such behavior, imitating the brain is the way to go.
But if you want to build an artificial general intelligence that does not make such mistakes, looking at the brain alone is not enough.
Of course, it is possible to define the state that produces the same illusions as the brain as "correct" and to aim for that.
But brain behavior can only be learned through observation.
We can observe how the brain behaves in particular situations, but we cannot examine every situation.
Since checking everything is impossible, at some point we must generalize by inductive inference.
In other words, it is necessary to define what "correctness" and "intelligence" are.
You may or may not refer to the brain when deciding what the "correct answer" is.
You can define "correctness" according to the kind of answer you want.
When trying to define what "intelligence" is, the "brain" is a good reference, but there is something even better.
To clarify "intelligence", we should observe "intelligence" itself rather than the "brain".
For example, if you want to know what an app does, it is better to launch it and watch it than to read its machine code.
If you want to know the algorithm, studying the "brain" or the "machine code" is effective, but it is difficult without first knowing the "purpose".
The "purpose" of a brain or of machine code is hard to grasp unless you have read all of it.
If the "purpose" is known in advance, you can infer what processing serves it from only a part of the brain or the machine code.
Unless a string of great coincidences lines up, it is impossible to create AGI by imitating a brain that is only half understood.
Once you decide, at some stage, what "intelligence" is, you can move forward.
Modern attempts at brain-mimicking artificial general intelligence are not headed in the wrong direction; they simply have no settled direction yet.
Those who believe that artificial general intelligence can be completed simply by imitating the brain do not realize that the direction has not been set.
It is difficult to reach a goal by random walk.
In modern society, easy-to-understand "test results" are required to obtain research funding, but what is really needed is the elucidation of "intelligence".

■ References

Marcus Hutter, Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, Springer, 2005.
A theoretical framework for general-purpose agents: an introduction to the AIXI advocated by Marcus Hutter (Invitation to Artificial General Intelligence (AGI)).