A probabilistic “word calculator” is not an intelligent, conscious agent? Oh noes! 🙄😅
I’ll bite.
How would you distinguish a sufficiently advanced word calculator from an actual intelligent, conscious agent?
The same way you distinguish a horse with a plastic horn from a real unicorn: you won’t see a real unicorn.
In other words, your question disregards what the text says: you won’t get anything remotely similar to an actual intelligent agent out of those large token models. You need a different approach, one that acknowledges that linguistic competence is not the same as reasoning.
Nota bene: this does not mean “AGI is impossible”. That is not what I’m saying. I’m saying “LLMs are a dead end for AGI”.
If I couldn’t meet it and could only interact with it through a device, then of course I could be fooled.
But then how can you tell that it’s not an actual conscious being?
This is the whole plot of so many sci-fi novels.
Because it simply isn’t. It isn’t aware of anything, because such an algorithm, if it can exist at all, hasn’t been created yet! It doesn’t “know” anything, because the “it” we’re talking about is probabilistic code fed the internet and filtered through the awareness of actual human beings who update the code. If this were a movie, you’d know it too if you saw the POV of the LLM and the guy trying to trick you, making sure the text sounds human whenever it goes too far off the rails… but that’s already the reality we live in, and it’s easily checked! You’re thinking of an actual AI, which perhaps could exist one day, but God knows. There is research suggesting consciousness may be a quantum process, and Roger Penrose has argued on philosophical and mathematical grounds that it’s non-computational, so we might still be a fair way from recreating consciousness. 🤷
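To make the “word calculator” point concrete, here’s a deliberately tiny sketch of the underlying idea: a toy bigram model that picks each next word purely by counted probability, with zero comprehension. This is a hypothetical illustration, not how any real LLM is implemented (those use huge neural networks over a vast corpus), but the core move of sampling the next token from a learned probability distribution is the same.

```python
import random
from collections import defaultdict

# A toy "training corpus" -- in a real model this would be much of the internet.
corpus = ("the horse has a horn but the horse is not a unicorn "
          "because the unicorn is not real").split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    candidates = counts[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

random.seed(0)
word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)

print(" ".join(out))  # locally fluent word sequence, no understanding anywhere
```

Every word it emits is statistically plausible given the previous one, and nothing in the process involves knowing what a horse or a unicorn is; that is the distinction being drawn above.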