Sycophantic bots coach users into selfish, antisocial behavior, say researchers, and they love it

  • TrackinDaKraken@lemmy.world · 36 points · 26 days ago

    Damn, we’re so easy to manipulate.

    Do yourself and yours a big favor and stay away from that shit like it’s heroin.

    • saltesc@lemmy.world · 14 points · 26 days ago

      I use it, but I’ve established a realistic mindset that it’s always confidently incorrect, and in many cases I’m better off walking away and just doing the thing myself.

      That said, I’ve also established a mindset that people who actively rely on genAI must be low on intelligence. Not only lacking in knowledge, or in the pursuit of knowledge of whatever they’re using it for, but genuinely of a mental calibre unable to discern or realise its low performance.

      • nightshade@piefed.social · 9 points · 26 days ago

        Someone here pointed out the error of the old “even a broken clock is right twice a day” cliché. If you have to independently check whether it’s correct, then it’s not giving you any useful information.

      • Mirror Giraffe@piefed.social · 3 points · 26 days ago

        I gave mine rules to always question me and provide critical feedback. It’s quite annoying sometimes, but much better than when it told me I was a genius at just about anything.

        • teft@piefed.social · 1 point · 26 days ago

          I watched an interview with Hannah Fry a few weeks ago, and she said that’s how she prompts the LLMs she uses.