• Iteria@sh.itjust.works
    1 year ago

    To be honest, I’m not sure how one could reasonably regulate AI. It feels like “locks are for honest people. For everyone else, there’s windows.” Unlike a nuclear bomb, it’s fairly trivial to make an AI of any sort. I think it’ll be interesting to see how we catch rogue actors in this space.

    • deejay4am@lemmy.world
      1 year ago

      I think something reasonable might be “no using AI to automatically manufacture or control anything” until we have better tools to understand how an AI makes decisions and to prevent undesirable outcomes.

      Right now, the models aren’t really suited for that anyway (they’re not conscious or continuous; they’re more like insanely complex decision trees: input → process → output). Humans can validate that output for flaws.

      AIs right now do not know the difference between lying and telling the truth, and they have no moral imperative one way or the other. They’re not conscious; they’re literally just stringing words together to match, as closely as possible, the patterns humanity has collectively produced in the training data fed to them.
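
      The “stringing words together from training data” idea can be shown with a toy sketch (this is a deliberately simplified bigram model, not how a real LLM is built; all names here are made up for illustration). The point it demonstrates: the program has no concept of truth, it only samples what tended to follow each word in its training text.

```python
import random
from collections import defaultdict

# Toy illustration: a bigram model that "strings words together" by
# sampling the next word based purely on which words followed it in the
# training text. Real models operate on tokens with far more context,
# but the core loop -- input -> process -> output, with no notion of
# truth or lying -- is the same in spirit.
training_text = "the cat sat on the mat the cat ate the fish"

# Record, for each word, every word observed to follow it in training.
followers = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(start, length=5):
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:  # dead end: this word never had a successor
            break
        out.append(random.choice(options))  # sample from training statistics
    return " ".join(out)

print(generate("the"))
```

      Everything it emits is “plausible given the training data,” which is exactly why output like this needs human validation rather than blind trust.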

      It’s actually pretty shitty marketing hype to call it “AI”; it’s not intelligent. “Machine learning” is much more accurate. “Intelligent” implies to the general public that we may not be the smartest creatures anymore and must now share our domain with something else, which is why there is panic and outrage. This is only fueled by the (very valid) complaints from creative and technical workers whose very jobs can be easily mimicked and automated away.

      Here’s a good regulation: if you use AI to reduce your workforce, you must continue to pay those workers 90% of their salary, including benefits and retirement, for as long as you use AI of any form to replace their labor. Let’s begin the transition to UBI!