• Rhaedas@fedia.io

    ELIZA worked just as well, as long as you had enough IF-THEN statements to cover every possible turn of a conversation. But you can't, and just like ELIZA, only in a more complex way, LLMs will miss an unexpected turn they can't predict. And just about every LLM I've seen ends up agreeing with you, which is not at all great for someone seeking mental health care.
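
    For anyone curious, here's a minimal sketch of that ELIZA-style IF-THEN matching (the rules and fallback below are made up for illustration, not Weizenbaum's actual script): every response comes from a hand-written pattern table, and any input the table doesn't anticipate falls through to a canned deflection, which is exactly the coverage problem.

    ```python
    import re

    # Hypothetical rule table: each entry is an IF (regex) -> THEN (template) pair.
    RULES = [
        (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    ]

    FALLBACK = "Please go on."  # fired whenever no IF-THEN rule matches

    def respond(utterance: str) -> str:
        """Return the first matching rule's filled-in template, else the fallback."""
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        # An unexpected turn of conversation lands here: the rule table
        # simply has no coverage for it.
        return FALLBACK

    if __name__ == "__main__":
        print(respond("I need a break"))       # Why do you need a break?
        print(respond("The weather is odd."))  # Please go on.  (uncovered turn)
    ```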

    • brucethemoose@lemmy.world

      Yeah. All the API-only big models are now and forever dead in the water for this, no matter how much they improve. The sycophancy is unreal.