• Sal@lemmy.world
    13 hours ago

    People using chatbots as therapists have literally given themselves psychosis and even killed themselves. Chatbots should NEVER offer counseling, ever.

  • wizardbeard@lemmy.dbzer0.com
    14 hours ago

    Hopefully an infinite distance away. Put simply, we don’t have chatbots capable of what counseling requires. We don’t have the technology for this, even assuming we could pick and choose the positive aspects of everything that works right now and somehow eliminate the downsides and added complexity of combining those various technologies and systems.

    LLMs are amazing, but they are just word-association engines. Sure, across an almost incomprehensible number of dimensions, but they have no real understanding, no concept of true or false. Even the ones built to cite sources and “reason” can easily be shown to do neither reliably, citing non-existent sources and clearly constructing their “thoughts” post hoc. And because of those foundational limitations inherent to their construction and function, they can’t be guided to stay within specific bounds in a conversation in any fully reliable way. They can’t be made to analyze someone over long conversations, map that to their issues, and then guide them toward even a widely accepted standard of positive growth. They may be able to approximate that in an incredibly lossy, error-prone, and non-deterministic way… but that isn’t good enough.
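
    To make the “word association” point concrete, here’s a toy sketch (the words and probabilities are made up, this isn’t any real model): generation is just sampling the next word from a weighted distribution, and nothing in the loop checks whether the output is true, safe, or sane.

        import random

        # Made-up next-word distribution for the prompt "I feel" -- purely illustrative.
        next_word_probs = {
            "fine": 0.40,
            "sad": 0.25,
            "nothing": 0.20,
            "worse": 0.15,
        }

        def sample_next_word(probs, temperature=1.0):
            # Temperature reshapes the weights, then we draw at random.
            # No step here models truth, falsity, or the person on the other end.
            weights = [p ** (1.0 / temperature) for p in probs.values()]
            return random.choices(list(probs.keys()), weights=weights, k=1)[0]

        # Same prompt, potentially a different answer on every run -- the non-determinism above.
        for _ in range(5):
            print("I feel", sample_next_word(next_word_probs, temperature=0.8))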

    Yes, there is potential in some combination of expert system, flowcharts, fuzzy logic, LLM, etc. But you can’t just slap them together and get something perfect with none of the flaws. Context switching between the underlying “engines” is a lossy, non-deterministic process that introduces more room for error. What’s most likely is that you end up with something that piles the drawbacks of each underlying system together into an inconsistent, non-deterministic mess that doesn’t function reliably.
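
    For a flavor of why, here’s a rough, purely hypothetical sketch of such a mashup. All the names, rules, and thresholds are invented for illustration; every handoff between the rule layer and the LLM stand-in is a spot where state gets flattened back into text and reliability drops.

        import random

        # Hypothetical hybrid "counselor": hard rules first, crude intent matching
        # second, an LLM stand-in as the fallback. Everything here is made up.
        CRISIS_WORDS = {"hurt myself", "end it"}
        FLOWCHARTS = {
            "sleep": ["How many hours are you sleeping?", "Have you tried a fixed bedtime?"],
        }

        def fuzzy_intent(msg):
            # Stand-in for fuzzy matching: naive keyword check with a fake score.
            return ("sleep", 1.0) if "sleep" in msg else ("unknown", 0.0)

        def llm_generate(prompt):
            # Stand-in for the LLM fallback: non-deterministic and unbounded.
            return random.choice(["Tell me more.", "That sounds hard.", "Have you tried journaling?"])

        def respond(msg, history):
            msg = msg.lower()
            if any(w in msg for w in CRISIS_WORDS):
                return "Please contact a crisis line."            # rigid expert-system rule
            intent, score = fuzzy_intent(msg)
            if score > 0.8:
                step = sum(1 for tag, _ in history if tag == intent)
                if step < len(FLOWCHARTS[intent]):
                    history.append((intent, msg))
                    return FLOWCHARTS[intent][step]               # deterministic flowchart
            # The lossy context switch: whatever the flowchart knew gets flattened
            # back into text and handed to the LLM, which may ignore all of it.
            history.append(("llm", msg))
            return llm_generate(" ".join(m for _, m in history))

        history = []
        for line in ["I can't sleep", "Maybe five hours", "I just feel off lately"]:
            print(line, "->", respond(line, history))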

    It’s not terribly difficult to understand the underlying functionality of these systems, and their drawbacks, at a 1,000-foot conceptual level. It’s fucking infuriating, this never-ending tide of absolutely clueless articles about this shit, just because people can’t think past “it appears to be able to carry a conversation”.

    • Rhaedas@fedia.io
      13 hours ago

      ELIZA worked just as well, as long as you had enough IF-THEN statements to cover every possible turn of a conversation. But you can’t, and just like ELIZA, only in a more complex way, LLMs will miss an unexpected turn if they can’t predict it. And just about every LLM I’ve seen will end up agreeing with you, which is not at all great for someone seeking mental health care.
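
      For anyone who hasn’t seen it, the ELIZA trick really is just an ordered list of IF-THEN pattern rules, something like this toy reconstruction (not Weizenbaum’s actual script). Coverage is exactly as good as the rule list; anything the patterns don’t anticipate falls through to the catch-all.

          import re

          # Toy ELIZA-style rules: first pattern that matches wins.
          RULES = [
              (r"i feel (.*)",                "Why do you feel {0}?"),
              (r".*\bmy (mother|father)\b.*", "Tell me more about your {0}."),
              (r"i can't (.*)",               "What makes you think you can't {0}?"),
              (r".*",                         "Please go on."),   # the unexpected-turn fallback
          ]

          def eliza(message):
              for pattern, template in RULES:
                  match = re.match(pattern, message.lower())
                  if match:
                      return template.format(*match.groups())

          print(eliza("I feel stuck"))                            # Why do you feel stuck?
          print(eliza("The bot agreed with everything I said"))   # Please go on.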

      • brucethemoose@lemmy.world
        13 hours ago

        Yeah. All the API-only big models are now and forever dead in the water for this, no matter how much they improve. The sycophancy is unreal.