• ByteJunk@lemmy.world

    Let me grab all your downvotes by making counterpoints to this article.

I’m not saying it’s wrong to bash the fake hype that the likes of Altman and alienberg are pushing with their outlandish claims that AGI is around the corner and that LLMs are its precursor. I think that criticism is 100% spot on.

But the news article presents an opinion as if it were scientific truth, and that’s not acceptable either.

The basis for the article is supposed “cutting-edge research” showing that language is not the same as intelligence. The problem is that this refers to a publication from last year that is basically an op-ed, in which the authors go over existing literature and theories to cement their view that language is a communication tool, not the foundation of thought.

The original authors do acknowledge that the growth of human intelligence is tightly tied to language, yet they assert that language is ultimately a manifestation of intelligence, not a prerequisite for it.

The nature of human intelligence is a much-debated topic, and this piece doesn’t particularly add to the existing theories.

Even if we accept the authors’ views, one might still question whether LLMs are the path to AGI. Obviously many leading AI researchers have the same question - most notably, Prof. LeCun is leaving Meta precisely because he shares these doubts and wants to pursue his research down a different path.

    But the problem is that the Verge article then goes on to conclude the following:

    an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

This conclusion is a non sequitur. It takes a specific point, whether LLMs can evolve into true AGI, and generalizes it into an “AI dumb” catchall that ignores even the most basic evidence they themselves give - like AI being able to “solve” go, or play chess in a way that no human can even comprehend - and, to top it off, it declares that this “will forever” be the case.

    Looking back at the last 2 years, I don’t think anyone can predict what AI research breakthroughs might happen in the next 2, let alone “forever”.