No, there is no AGI anywhere close
I keep reading people who believe you can automagically turn AI into AGI by adding computation capacity, but IMO there is no way that's going to happen any time soon.
Unless Popper was wrong, of course.
But I think he was not. He essentially said that we learn by guessing: by imagining alternatives and then choosing the one that seems like the best explanation.
Even when we read, there is no such thing as pouring knowledge into someone's brain. When you read, you don't understand until you imagine, in your own terms, what the author is trying to say. In other words, we need to build ideas ourselves that match, to a better or worse extent, whatever the author is trying to say.
In fact, Deutsch argues that when we build that idea we're doing the equivalent of what a computer simulator does. We imagine it. But that's content for another post.
For example, you, dear reader: every few words of mine, you first have to build, almost inadvertently, your own version of what I'm trying to say. You imagine it, because you don't actually receive bit by bit exactly what I mean. You add to each one of my sentences your own creativity and inputs coming from your other ideas.
In fact, if your ideas prior to reading this are diametrically opposed to mine, chances are you will reject it completely, since ideas are chained logically, and accepting it might force you to rebuild other prior ideas that were supporting later ones.
Consequently, you vary whatever I am trying to say, getting it closer to the truth or perhaps a little bit further away. It is variation, criticism, and error correction that make our knowledge grow.
If you leave a comment below criticizing this, you may add a point of view I never thought of and broaden my perspective. And then, perhaps with that criticism, or perhaps with three others, I may reach yet another awesome conclusion that I'll share here later on.
Isn't writing publicly absolutely awesome? I'll write another post about what writing means to me in the coming weeks.
About AI
For starters, AI shouldn’t be called AI because there is no intelligence there.
First, because nobody can claim to be sure of what intelligence is.
Second, because the closest thing to intelligence is what I have just described: a brain that iterates, in an endless process, through the creation of alternative explanations, criticism, and error correction, turning very basic and general explanations of how the world works into far more fine-grained ones.
The harsh reality is that we learn by brute force: we keep iterating until those explanations get refuted, either by alternative explanations or by experiments against reality.
This is not what current AI does though.
Current AI is a set of models with access to massive amounts of data, which they handle in very impressive ways to construct language or images. The results are not only things we understand and use; they are also close to the current consensus on the understanding of multiple matters, since some engine will try to find the most searched results, perhaps from some other search engine or with its own, I don't know.
So current AI's usefulness is beyond question. But it is not anywhere close to human intelligence.
For that to happen, there needs to be a qualitative leap forward, not a quantitative one.
Adding computational power is not enough. It's not about adding zettaflops to massive supercomputers, about doing the same thing faster.
It's about doing something entirely different. That variation, imagination, criticism, and error correction don't happen, even if your favorite AI apologizes when you correct it. That is not error correction, and it is not learning.
AI doesn't explain anything that isn't already in some website or paper written by a human. AI doesn't create any new knowledge. It can give you several versions of existing knowledge from someone else, but it can't create an entirely new piece of knowledge that makes sense.
And even if Popper is right, we would only know what the human brain does; we have no idea how it does it. Without an explanation of how it does it, we can't embed that knowledge into a completely new AI mechanism to turn it into AGI.
In fact, we can't be sure Popper is right, because we can't validate his theory with experiments. We don't even know where to start.
So how we learn remains a philosophical question, not something to be settled in laboratories, let alone in Silicon Valley startups.
I will start paying attention to claims about AGI right after someone provides a good explanation of how the brain does what it does.