Discussion about this post

Monica Anderson

It is indeed another key requirement for intelligence, and it is especially important in learning, because you do not want to learn that which is unimportant. Therefore, already during learning, the algorithms check to what extent we already know the incoming information, and to what extent we know it to be irrelevant. We can call this "Low Perplexity Input": as long as all active contexts agree that there are no surprising new sub-relations, nothing new needs to be learned. "This chapter could have been replaced with an empty string." One recent improvement in learning speed is the automatic removal of low-perplexity pieces of the corpus.
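As a rough illustration of the general idea (a minimal sketch of perplexity-based corpus filtering, not the commenter's actual algorithm), one could score each corpus chunk with a reference language model and drop the chunks the model already predicts well. The model name and the cutoff value below are illustrative assumptions.

```python
# Sketch: remove "low perplexity" (already-known) chunks from a training corpus.
# Assumes the Hugging Face `transformers` library; "gpt2" and the cutoff are
# placeholder choices, not values from the original comment.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"          # assumed reference model
PERPLEXITY_FLOOR = 15.0      # assumed cutoff; below this a chunk is "already known"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the reference model (exp of mean token loss)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

def filter_corpus(chunks: list[str]) -> list[str]:
    """Keep only chunks surprising enough to be worth learning from."""
    return [c for c in chunks if perplexity(c) >= PERPLEXITY_FLOOR]

# A chunk restating common knowledge tends to score low and gets dropped;
# a chunk carrying novel relations scores higher and is kept.
corpus = [
    "The sun rises in the east and sets in the west.",
    "Our measurements show the alloy's conductivity doubles below 4 kelvin.",
]
print(filter_corpus(corpus))
```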

By definition, if something is surprising, then it is new and must be learned.

OTOH, if it contradicts existing knowledge in multiple ways, it may best be rejected... or we get a cognitive dissonance that requires serious refactoring of one's beliefs. "There is no Santa Claus." This refactoring may be best described as wrapping that set of beliefs in a context of "Fiction".

We filter incoming information the same way at higher levels. Most college graduates have enough experience with general science to spot crackpot theories at a glance.

I have developed algorithms that go beyond this basic approach.

Bernd Nurnberger

“Intelligence is the ability to jump to reasonable conclusions on scant evidence, based on a lifetime of experience. Because scant evidence is all we will ever have in the real world.”

Thank you. This is a helpful definition for me. How does it mesh with “Intelligence is the ability to discern relative importance”?

