OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated its prompts, often feeding in long excerpts of the articles themselves, to get the model to regurgitate them. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.

OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

  • SheeEttin@programming.dev
    1 year ago

Generally you’re correct, but copyright law does concern itself with learning. Fair use analysis requires considering the purpose and character of the use, explicitly mentioning nonprofit educational purposes. It also considers the effect of the use on the potential market for the original work. (There are other required factors, but they’re less relevant here.)

    So yeah, tracing a comic book to learn drawing is totally fine, as long as that’s what you’re doing it for. Tracing a comic to reproduce and sell is totally not fine, and that’s basically what OpenAI is doing here: slurping up whole works to improve their saleable product, which can generate new works to compete with the originals.

    • ricecake@sh.itjust.works
      1 year ago

What about the case where you’re tracing a comic to learn how to draw, with the intent of using the new skills to compete with the artist you learned from?

      Point of the question being, they’re not processing the images to make exact duplicates like tracing would.
      It’s significantly closer to copying a style, which you can’t own.

    • General_Effort@lemmy.world
      1 year ago

      I meant “learning” in the strict sense, not institutional education.

      I think you are simply mistaken about what AI is typically doing. You can test your “tracing” analogy by making an image with Stable Diffusion. It’s trained only on images from the public internet, so if the generated image is similar to one in the training data, then a reverse image search should turn it up.