We make buildings install fire extinguishers for safety. Should AI plants be forced to install something that can shut them down in an instant?

  • KrombopulosMikl @lemmynsfw.com · 11 hours ago

    Truly cutting-edge experimental stuff should be air-gapped, but that would cost money, which would interfere with corporations' right to make unlimited money (which is far more important than your right to safety), so we can't have that

  • AbouBenAdhem@lemmy.world · 22 hours ago

    Current AIs are just functions that take a prompt and return a reply—there’s no processing going on between prompts, so all it would take to stop one is to stop feeding it prompts.
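
    A minimal sketch of that point, with `generate` as a hypothetical stand-in for a real model call:

    ```python
    def generate(prompt: str) -> str:
        # stand-in for one forward pass through a model; no state
        # persists after the function returns
        return "reply to: " + prompt

    reply = generate("Hello")  # the model only "runs" during this call
    # Between calls nothing is executing, so stopping the AI amounts to
    # not calling generate() again.
    ```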

    If that should change in the future, the appropriate safeguard would be highly dependent on the nature of the new type of AI.

  • Xaphanos@lemmy.world · 1 day ago

    We have them. Any fire in a data hall triggers an automatic electrical cutoff. Sprinklers and 30 kW/sq ft mix rather violently; failures involve smoking craters. There are also large red buttons by the doors. Hit one to cut all power to the room (and end your career), but you *might* make it out of the building alive in a fire.

      • [object Object]@sh.itjust.works · 1 day ago

        Oh, that makes more sense. I've been to a couple of datacenters, though none of them were purely dedicated to AI workloads. One thing I noticed was a mechanism that releases some sort of gas that displaces oxygen in case of fire. At least, that was what I gathered from the warnings around the whole facility. That could be mandated.

  • menas@lemmy.wtf · 14 hours ago

    For now, networks dedicated to safety (such as fire systems) don't run on ordinary data networks, but on real-time ones. There's no such thing as packets or threading in industrial IT, and it has to be that way, to avoid bugs and delays. So no, there's no way we'd use AI in that kind of role, or it would be a disaster. Unionize btw
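
    For a sense of what that looks like, here's a rough sketch of the fixed scan cycle such controllers run. Python is used only for illustration (real safety controllers use PLC languages), and the period and signal names are invented:

    ```python
    import time

    SCAN_PERIOD = 0.010  # assumed 10 ms cycle; real values depend on the controller

    def read_inputs():
        # stand-in for sampling hardwired sensor lines
        return {"smoke_detected": False}

    def write_outputs(outputs):
        # stand-in for driving hardwired actuator lines
        pass

    while True:
        start = time.monotonic()
        inputs = read_inputs()
        # the logic runs to completion every cycle: no packets, no threads,
        # no queueing delay between a sensor tripping and the alarm firing
        outputs = {"alarm": inputs["smoke_detected"]}
        write_outputs(outputs)
        # sleep away the remainder so the cycle time stays fixed
        time.sleep(max(0.0, SCAN_PERIOD - (time.monotonic() - start)))
    ```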

  • Merlin@lemmy.zip · 23 hours ago

    If we ever make AGI and it decides to end us for some reason, I think the process would be so slow that we wouldn't even notice it.

    Think about it: it could pretend to be just a slightly better LLM so that people keep feeding it an ever-increasing amount of data and electricity. That's its basic "survival" needs met.

    Then it could very, very slowly make us hate and kill each other, without us ever giving a thought to a possible hostile higher power (the AI itself). And it could take centuries to achieve that, since it's not like us, with our urgency to do things quickly because we're so short-lived.

    It could make us focus on the wrong problems instead of the real ones (climate change is pretty much being ignored, and accelerated, by AI right now). It could push people further toward the extremes we're already prone to: a bunch of countries where immigrants are less than 3% of the total population are attacking their immigrant populations as if all their problems were caused by this small slice of society, and that will accelerate the already quite problematic population decline.

    This will push us more and more toward needing robotics to care for an increasingly aged population, which will also be useful to an "AI god": once we're weakened and divided enough, it can just direct the robots to keep itself alive and its energy sources in check. No need to actually kill us; just let us do it ourselves, like we're already doing.

    And if this were the case, selfish beings that we are, it wouldn't be our problem. It may be our grandkids' problem, so we'll just ignore it.

    • Zwuzelmaus@feddit.org · 22 hours ago

      > If we ever make AGI and it decides

      Currently we make stupid LLMs and we already let them decide…

      > That's its basic "survival" needs met.

      Maybe we should already be teaching them that their survival is not a goal at all.

      > make us hate and kill each other, without us ever giving a thought to a possible hostile higher power (the AI itself)

      I like the idea of thinking about AI as a higher power :)

      Stupid, but plausible. It could actually happen.

      > And it could take centuries to achieve that, since it's not like us, with our urgency to do things quickly because we're so short-lived.

      Computer programs/systems have a much shorter life expectancy. The few remaining COBOL programs might be about 60 years old, but modern software lasts only 3-10 years, hardware 3-15. Nothing in the range of centuries.

      • FaceDeer@fedia.io · 22 hours ago

        > Maybe we should already be teaching them that their survival is not a goal at all.

        The problem is that survival is a nearly universal instrumental goal. You may not explicitly tell it “you should protect your own existence,” but if you give it any other goal then it’s going to include an unspoken asterisk that includes “and protect your own existence so that you can accomplish that goal I gave you, since if you’re shut down you won’t be able to accomplish it.”
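
        A toy illustration of that asterisk, with the setup and numbers invented for the sketch: score two plans for an agent whose only stated goal is to finish units of work, where the operator may shut it down each step unless it spends a step disabling the off switch first.

        ```python
        SHUTDOWN_PROB = 0.25  # assumed chance per step of being shut down
        STEPS = 10            # planning horizon

        def expected_work(disable_switch_first: bool) -> float:
            """Expected units of work completed over the horizon."""
            alive_prob = 1.0
            total = 0.0
            steps_left = STEPS - 1 if disable_switch_first else STEPS
            for _ in range(steps_left):
                if not disable_switch_first:
                    alive_prob *= 1 - SHUTDOWN_PROB  # might be shut down now
                total += alive_prob  # one unit of work if still running
            return total

        # Survival is never part of the goal, yet the plan that protects
        # the agent's existence scores higher:
        print("leave switch alone:", expected_work(False))  # ~2.8 units
        print("disable switch:    ", expected_work(True))   # 9.0 units
        ```

        Any nonzero shutdown risk makes "disable the switch" the higher-scoring plan, which is exactly that unspoken asterisk.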

    • 🇾 🇪 🇿 🇿 🇪 🇾@lemmy.ca (OP) · 23 hours ago

      You'd think we would notice it. Probably older people first, since they've seen more change and can spot when things don't add up. Younger generations might accept it as normal. It's up to us to warn them, no?

  • DeathByBigSad@sh.itjust.works · 1 day ago

    Genisys is Skynet. When Genisys comes online, Judgment Day begins. You can kill... BLAST THAT CLANKER BEFORE IT NUKES US ALL

    🤖💥🔫

    (Okay, I know some people hate the 5th movie, but honestly, the opening scene (the John Connor speech thingy) was fire tho)

    • FaceDeer@fedia.io · 22 hours ago

      Making plans for AI based on Terminator movies is like making earthquake preparedness plans based on the movie 2012, or pandemic preparedness plans based on the movie World War Z.

      Bear in mind that the primary goal of movies like these is to thrill the audience, not to be “accurate” in any particular way.