Humans suck at almost everything. Yann LeCun recently pointed this out.
An infant sucks at everything (joke opportunity ignored), a 4-year-old knows some language but can't handle working at a factory, and someone who doesn't know math can't be a math teacher.
Cue up the scene in "I, Robot" where Will Smith's character asks "Can a robot compose a symphony? Can a robot turn a canvas into a beautiful masterpiece?" and the robot answers "Can you?"
AIs can learn language and a couple of problem domains that we want to use them for. Today. But if we want an AI to approve building permits at the local city offices, there's no need for it to know how to cook. Or make coffee. Because then it would need arms and legs.
Just because we teach them only enough to handle ONE job does not mean they are "narrow AI". The only kind of AI that is truly general is one that is capable of learning anything.
Insisting that bots and AIs learn everything any human knows, already at the factory, is a waste of both memory and learning time. And it's a 20th Century idea, based on then-future AIs that are programmed, not learned. "We have to have AIs to program AIs – how else could they know everything?" was the dominant 20th Century paradigm in the minds of my Reductionist competitors in the AI field as late as 2015.
These diehards had to be dragged to Deep Neural Networks and Machine Learning kicking and screaming. And they still keep pushing their old "AGI" idea. They can read manuals and API specs and see how LLMs work, but they will never understand why they work until they switch to a Holistic Stance.
And if we listen today to the people who invented the term "AGI", there is still an undercurrent of disbelief when they claim to understand LLMs, ML, and DL. They are all still Reductionists.
Hey, take the Red Pill, y'all.
There are useful and useless definitions of the term "AGI". Coincidentally, by all useful definitions of "AGI", we already have it.
"AGI is the ability to learn any problem domain" is useful because it points to Machine Learning as the path forward.
"An AGI knows how to do anything as well as a human" is not useful, because humans are different and have learned different skills.
The problem is that the useless (and frankly, stupid) definitions generate more clicks among outsiders.
I myself don't compare AI competence to human competence. Competence is multi-faceted, and attempts to reduce competences to a single measurable quantity, like "IQ", have been discredited for decades.
Many AI enthusiasts who learned about "AGI" in the 20th Century seem to want something scary to talk about as a threat. And some want it as a criterion for success, so we know when we are done. Done? Huh?
But "AGI" isn't going to spring forth on a specific future date. Our LLMs will gradually improve, becoming better and better at the tasks we want them to perform for us. And there is no reason to require that they know more than a few tasks beyond understanding language – which we have already achieved. Hence my statement that a building-permit-approval AI doesn't need to know how to cook; that would be a waste of memory and learning time. Generality isn't about learning EVERYTHING at the factory.
The only General AI is one capable of learning anything. Some information they will require in operation may not even have been known by the time they were built and learned language. Humans starting most new jobs have to learn the job on the job, because they didn't learn it "at the factory", in school.
All talk about "when will we achieve AGI" is noise, perpetuated by the press, influencers, and old-time fans of Reductionist AI. Because it generates clicks.
To summarize: we need, and already have, Artificial General Learners, "capable of learning any problem domain". I tend to favor definitions that provide implementation hints. "AGI" is useless, opaque, and nebulous, whereas "AGL" says we need Machine Learning.
Humans are not "General Intelligences" at birth. But we are General Learners.
I like the framing that AGI is already here, in the sense that we have learners that can master any domain. The only part not satisfied right now is the notion that everything needs to be present in a single entity – and that feature is of questionable utility.