The Federal Trade Commission is taking action against multiple companies that have relied on artificial intelligence as a way to supercharge deceptive or unfair conduct that harms consumers, as part of its new law enforcement sweep called Operation AI Comply.

The cases being announced today include actions against a company promoting an AI tool that enabled its customers to create fake reviews, a company claiming to sell “AI Lawyer” services, and multiple companies claiming that they could use AI to help consumers make money through online storefronts.

“Using AI tools to trick, mislead, or defraud people is illegal,” said FTC Chair Lina M. Khan. “The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, the FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”

  • Lvxferre@mander.xyz · 2 months ago

    That seems sensible.

    Even a hypothetically true artificial general intelligence would still not be a moral agent, thus it cannot be held responsible for its actions; as such, whoever deploys and maintains it should be held responsible. That’s doubly true with LLMs as they aren’t even intelligent to begin with.

    • Rhaedas@fedia.io · 2 months ago

      Even a hypothetically true artificial general intelligence would still not be a moral agent

      That’s a deep rabbit hole and can’t be stated as a known fact. It’s absolutely true right now with LLMs, but at some point the line could be crossed. If, when, how, and by what definition that would happen has been a long-running debate that is nowhere near resolved.

      It’s entirely possible that an AGI/ASI could come about that is both superintelligent and self-conscious, yet still has no sense of morality. And how can we, at a human level, even comprehend what’s possible? That’s the real danger: we have no idea what we could be heading towards.

      • Lvxferre@mander.xyz · 2 months ago

        To be a moral agent, your actions towards others need to have consequences for yourself - be those consequences direct, social, emotional, or something else. And intelligence on its own doesn’t provide those consequences.

        The closest you could get, with AGI alone, would be to hardcode it with ethical principles, but that’s another matter. (I’m saying this because people often conflate ethics and morality, even though they’re two different cans of worms.)