The Traveling Salesman Problem (TSP) is NP-Hard.
Finding the provably optimal route through even a moderate number of cities can require more computation than is available on this planet: the number of possible tours grows factorially with the number of cities. See Wikipedia.
So why are real traveling salesmen not totally paralyzed?
Because they are intelligent. Which means they do not have to be Scientifically Correct.
Intelligence is the ability to jump to reasonable conclusions on scant evidence, based on a lifetime of experience. Because scant evidence is all we will ever have in the real world.
So real traveling salesmen just pick a reasonable route and follow it. Mathematical optimality is a non-goal in real life.
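To make "good enough" concrete, here is a minimal sketch of the kind of shortcut a pragmatic salesman effectively uses: the nearest-neighbor heuristic, which always drives to the closest unvisited city. The cities below are made-up random points, purely for illustration.

```python
import math
import random

# Hypothetical data: 10 cities at random coordinates in the unit square.
random.seed(0)
cities = [(random.random(), random.random()) for _ in range(10)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Total length of the closed tour visiting cities in this order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor(start=0):
    # Greedy heuristic: from each city, go to the closest unvisited one.
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda c: dist(cities[here], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbor()
print(f"greedy tour: {tour}, length {tour_length(tour):.3f}")
# Exhaustive search would compare (n-1)!/2 tours: 181,440 for these
# 10 cities, roughly 10^157 for 100 cities. The greedy pass is O(n^2).
```

On random instances a tour like this typically lands within a few tens of percent of optimal: not Scientifically Correct, and entirely good enough for a day of driving.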
Mathematicians adopt the Reductionist Stance because Math requires that.
Traveling Salesmen, like humans in almost all everyday situations, find solutions to their everyday problems by adopting a Holistic Stance. Where good enough is, well, good enough.
The 2012 breakthrough in Artificial Intelligence happened because we figured out how to write computer programs that jump to conclusions on scant evidence.
AI is not Scientific. A lot of people are unhappy about this. :-D
I expect AI to help us with "The Remaining Hard Problems" – problems that do not have solutions that would satisfy Reductionist criteria for optimality, completeness, repeatability, and explainability. Things like the global economy, drug interactions in the human body, the complexity of the brain, genomics, global politics, etc.
Species-level problems require a Holistic Stance.
My Red Pill post discusses this (and many other surprises enabled by a Holistic Stance) in great detail.
It is indeed another key requirement for intelligence, and it is especially important in learning because you do not want to learn that which is unimportant. Therefore, already during learning, the algorithms check to what extent we already know this incoming information, and to what extent we know it to be irrelevant. We can call it "Low Perplexity Input": as long as all active contexts agree that there are no surprising new sub-relations, nothing new needs to be learned. "This chapter could have been replaced with an empty string". One recent improvement in learning speed is the automatic removal of low-perplexity pieces of the corpus.
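The post does not show the actual algorithms, but a toy sketch of what perplexity-gated learning could look like is below. Everything here is a stand-in I made up for illustration: the character-bigram model (in place of whatever model really scores the input), the threshold value, and the sample strings.

```python
import math
from collections import Counter

def train_bigrams(corpus):
    # Count character bigrams and unigrams in the known corpus.
    return Counter(zip(corpus, corpus[1:])), Counter(corpus)

def perplexity(text, pairs, unigrams, alpha=1.0, vocab=256):
    # Add-alpha smoothed bigram perplexity: low values mean the model
    # already predicts this text well, i.e. it contains little news.
    log_prob = 0.0
    for a, b in zip(text, text[1:]):
        p = (pairs[(a, b)] + alpha) / (unigrams[a] + alpha * vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(text) - 1, 1))

known = "the cat sat on the mat. " * 20
pairs, unigrams = train_bigrams(known)

THRESHOLD = 15.0  # illustrative cutoff, not a tuned or published value
for chunk in ["the cat sat on the mat.", "zqx vbnk wjd qqq!"]:
    ppl = perplexity(chunk, pairs, unigrams)
    verdict = "skip: already known" if ppl < THRESHOLD else "learn: surprising"
    print(f"{ppl:8.2f}  {verdict}  <- {chunk!r}")
```

The first chunk scores low perplexity and is dropped, the mechanical version of "this chapter could have been replaced with an empty string"; the second is surprising and would be kept for learning.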
By definition, if something is surprising, then it is new and must be learned.
OTOH, if it contradicts existing knowledge in multiple ways, it may be best rejected... or we will have cognitive dissonance that requires serious refactoring of our beliefs. "There is no Santa Claus". This refactoring may be best described as wrapping this set of beliefs in a context of "Fiction".
We filter incoming information the same way at higher levels. Most college graduates have enough experience with general science to spot crackpot theories at a glance.
I have developed algorithms that go beyond this basic approach.
“Intelligence is the ability to jump to reasonable conclusions on scant evidence, based on a lifetime of experience. Because scant evidence is all we will ever have in the real world.”
Thank you. This is a helpful definition for me. How does it mesh with “Intelligence is the ability to discern relative importance”?