• Iteria@sh.itjust.works

    To be honest, I’m not sure how one could reasonably regulate AI. It feels like “locks are for honest people. For everyone else, there’s windows.” Unlike a nuclear bomb, it’s fairly trivial to make an AI of any sort. I think it’ll be interesting to see how we catch rogue actors in this space.

    • deejay4am@lemmy.world

      I think something reasonable might be “no using AI to automatically manufacture or control anything” until we have better tools to understand how an AI makes decisions and to prevent undesirable outcomes.

      Right now, the models aren’t really suited for that anyway (they’re not conscious or continuous, they’re more like insanely complex decision trees; input->process->output). Humans can validate that output for flaws.
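      To make that concrete, here’s a minimal, hypothetical sketch of such a human-in-the-loop gate in Python. The model call is treated as a stateless input->process->output function (the `query_model` stub is a stand-in for any real API, not an actual library call), and nothing acts until a person validates the output:

      ```python
      def query_model(prompt: str) -> str:
          """Stateless stand-in for a real model API: no memory, no
          consciousness, just input -> process -> output."""
          return f"proposed action for: {prompt}"

      def human_approved(proposal: str) -> bool:
          """The validation step: a person reviews the output for flaws."""
          answer = input(f"Model proposes {proposal!r}. Approve? [y/N] ")
          return answer.strip().lower() == "y"

      def controlled_pipeline(prompt: str) -> None:
          proposal = query_model(prompt)
          if human_approved(proposal):
              print("executing:", proposal)
          else:
              print("rejected; nothing was automated")

      controlled_pipeline("adjust valve pressure")
      ```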

      AIs right now do not know the difference between lying and telling the truth. They have no moral imperative one way or the other. They aren’t conscious; they’re literally just stringing words together in whatever way best matches how humanity collectively strung them together in the training data that was fed to them.
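      As a toy illustration of that “stringing words together,” the hypothetical bigram sampler below just emits whichever word most often followed the previous one in its tiny training corpus. It has no concept of truth or lies, only of frequency:

      ```python
      from collections import Counter, defaultdict

      # Tiny stand-in "training data"; a real model sees trillions of words.
      corpus = "the sky is blue the sky is clear the sea is blue".split()

      # Count which word follows which: pure statistics, no understanding.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def next_word(word: str) -> str:
          """Return the most frequent continuation seen in training."""
          return following[word].most_common(1)[0][0]

      word, sentence = "the", ["the"]
      for _ in range(4):
          word = next_word(word)
          sentence.append(word)

      print(" ".join(sentence))  # prints "the sky is blue the"
      ```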

      It’s actually pretty shitty marketing hype to call it “AI”; it isn’t intelligent. “Machine Learning” is much more accurate. “Intelligent” implies to the general public that we may no longer be the smartest creatures around and must now share our domain with something else, which is why there is panic and outrage. This is only fueled by the (very valid) complaints from creative and technical workers whose very jobs can be easily mimicked and automated away.

      Here’s a good regulation: if you use AI to reduce your workforce, you must keep paying those workers 90% of their salary, benefits and retirement included, for as long as you use AI of any form to replace their labor. Let’s begin the transition to UBI!

  • Candelestine@lemmy.world

    This has been my biggest concern. Getting it wrong will simply drive certain types of research underground or overseas, where they will still happen, merely less controlled.

    It’s an issue that needs to be approached very carefully. I don’t find myself advocating for government bureaucracy very often, but AI deserves its own subcommittees in the House and Senate, so a handful of politicians can at least take the time to become specifically educated on the topic. Then they can be douchebags about it, but at least they won’t be able to claim ignorance.

    • cryshlee@lemm.ee

      I feel that for that to happen, we really need younger representatives who understand how AI works and its current and potential impacts. I don’t see any of the older politicians having anywhere close to the understanding, or the willingness to learn about this kind of technology, that would lead to productive discourse and regulation.