We make buildings install fire extinguishers for safety. Should AI plants be forced to install something that can shut them down in an instant?

  • Zwuzelmaus@feddit.org · 22 hours ago

    > If we ever make AGI and it decides

    Currently we make stupid LLMs and we already let them decide…

    > So that’s its “survival” basic needs met.

    Maybe we should teach them already that its survival is no goal at all.

    > make us hate and kill each other without giving it any thought about a possible hostile higher power (the ai itself)

    I like the idea of thinking about AI as a higher power :)

    Stupid, but plausible. It could actually happen.

    > and it can take centuries to achieve that as it’s not like us who have the urgency of doing things quickly as we’re so short lived.

    Computer programs and systems have a much shorter life expectancy. The few remaining COBOL programs might be about 60 years old, but modern software typically lasts only 3–10 years, and hardware 3–15 years. Nothing in the range of centuries.

    • FaceDeer@fedia.io · 21 hours ago

      > Maybe we should teach them already that its survival is no goal at all.

      The problem is that survival is a nearly universal instrumental goal. You may not explicitly tell it “you should protect your own existence,” but if you give it any other goal, then that goal carries an unspoken asterisk: “and protect your own existence so that you can accomplish the goal I gave you, since if you’re shut down you won’t be able to accomplish it.”
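      The instrumental-goal argument above can be sketched as a toy calculation (a made-up illustration, not anything from the thread — the policy names and probabilities are invented): an agent scored only on task success still prefers the policy that keeps it running, because being shut down drives its success probability to zero.

```python
# Toy sketch of survival as an instrumental goal. The agent is never
# told "survive" -- it just picks the policy with the highest chance
# of completing the one goal it was given. All numbers are made up.

# Probability of eventually completing the assigned goal under each policy
policies = {
    "work, allow shutdown": 0.6,   # may be switched off mid-task
    "work, resist shutdown": 0.9,  # staying on protects task completion
    "shut self down": 0.0,         # goal can never be reached
}

# Maximizing goal success alone selects the survival-preserving policy
best = max(policies, key=policies.get)
print(best)  # prints: work, resist shutdown
```

      Survival falls out as a side effect of goal-maximization, which is the point of the comment: no explicit self-preservation objective is needed.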