I know there are other ways of accomplishing that, but this might be a convenient way of doing it. I’m wondering though if Reddit is still reverting these changes?

  • GBU_28@lemm.ee · 8 months ago

    Wouldn’t be hard to scan a user and say:

    • the account has existed for 5 years.
    • they make something like 5 comments a day, and edit 1 or 2 comments a month.
    • then, out of nowhere, on March 7th 2024 they edited 100% of their comments across all subs.
    • so use the comment versions from March 6th 2024.
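
    A heuristic like the list above could be sketched roughly like this (a minimal illustration, not anything Reddit or Google is known to run; the function name and thresholds are made up for the example):

```python
from collections import Counter
from datetime import date

def find_mass_edit_date(edit_dates, baseline_per_month=2, spike_factor=50):
    """Flag a day whose edit count dwarfs the account's usual edit rate.

    edit_dates: one datetime.date per comment edit the account ever made.
    Returns the most extreme suspect date, or None if no spike is found.
    """
    per_day = Counter(edit_dates)
    # An account editing ~2 comments/month averages well under 1 edit/day;
    # treat anything 50x that baseline as a mass-edit event.
    threshold = max(1, baseline_per_month / 30) * spike_factor
    spikes = [d for d, n in per_day.items() if n >= threshold]
    return max(spikes, key=lambda d: per_day[d]) if spikes else None

# 120 edits on one day versus a couple of routine edits earlier:
edits = [date(2024, 3, 7)] * 120 + [date(2023, 5, 1), date(2023, 9, 2)]
find_mass_edit_date(edits)  # date(2024, 3, 7)
```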
    • Lvxferre@mander.xyz · 8 months ago

      It would.

      First you’d need to notice the problem. Does Google even realise that some people want to edit their Reddit content to boycott LLM training?

      Let’s say that Google did it. Then it’d need to come up with a good set of rules (generalisable, with low false-positive and false-negative rates) to sort those out. And while coming up with “random” rules is easy, good ones take testing, trial and error, and time.

      But let’s say that Google still does it. Now it’s retrieving and processing a lot more info from the database than just the content and its context: account age, when the piece of content was submitted, when it was edited.

      So doing it still increases the costs associated with the corpus, making it less desirable.

      • GBU_28@lemm.ee · 8 months ago

        Huh? Reddit has all of this, plus the change history, in their own DBs. Google has nothing to do with this; it’s pre-handover.

        • Lvxferre@mander.xyz · 8 months ago (edited)

          I’m highlighting that having the data is not enough if you don’t find a good way to use it to sort the trash out. Google will need to do that, not Reddit; Reddit is only handing the data over.

          Is this clear now? If you’re still struggling to understand it, refer to the context provided by the comment chain, including your own comments.

          • GBU_28@lemm.ee · 8 months ago (edited)

            I’m saying reddit will not ship a trashed deliverable. Guaranteed.

            Reddit will have already preprocessed for this type of data damage. This is basic data engineering: finding events in the data and understanding the time series of those events is trivial.

            Google will be receiving data that is uncorrupted, because they’ll get data properly versioned to before the damaging event.

            If a high-edit event happens on March 7th, they’ll ship March 7th minus one day. Guaranteed.

            Edit to be clear: you’re ignoring/not accepting the practice of noting a high volume of edits per user as an event, and using that timestamped event as a signal of data validity.
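
            The versioning idea could look something like this hypothetical helper, where the event date would come from an edit-spike heuristic like the one described upthread (names and shapes are invented for the sketch):

```python
from datetime import date

def pick_version(versions, mass_edit_date):
    """versions: (edit_date, text) pairs for one comment, sorted ascending.

    Return the newest text saved strictly before the mass-edit event,
    or None if every surviving version postdates it.
    """
    clean = [text for edit_date, text in versions if edit_date < mass_edit_date]
    return clean[-1] if clean else None

history = [(date(2022, 1, 5), "original"),
           (date(2023, 6, 2), "small fix"),
           (date(2024, 3, 7), "garbage overwrite")]
pick_version(history, date(2024, 3, 7))  # "small fix"
```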

            • Lvxferre@mander.xyz · 8 months ago (edited)

              I’m saying reddit will not ship a trashed deliverable. Guaranteed.

              Nobody said anything about the database being trashed. What I’m saying is that the database is expected to have data unfit for LLM training, that Google will need to sort out, and Reddit won’t do it for Google.

              Reddit will have already preprocessed for this type of data damage.

              Do you know it, or are you assuming it?

              If you know it, source it.

              If you’re assuming, stop wasting my time with shit that you make up and your “huuuuh?” babble.

              • GBU_28@lemm.ee · 8 months ago (edited)

                I know it because I’ve worked in corporate data engineering and large data migrations, and it would be abnormal to do anything else. There’s a full review of test data, a scope of work, an acceptance period, etc.

                You think reddit doesn’t know about these utilities? You think Google doesn’t?

                You need to chill out and acknowledge how the industry works. I’m sure you’re convinced, but your idea of things isn’t how it actually operates.

                I don’t need to explain to you that the sky is blue. And I shouldn’t need to explain to you that Google isn’t going to accept a damaged product, and that Reddit can do some basic querying and time-series manipulation.

                Edit: it’s like you literally asked for a textbook.

                • Lvxferre@mander.xyz · 8 months ago (edited)

                  I know it because I’ve worked in corporate data migrations

                  In other words: “I dun have sauce, I’m assooming, but chruuuust me lol”

                  At this rate it’s safe to simply ignore your comments as noise. I’m not wasting further time with you.

                  • GBU_28@lemm.ee · 8 months ago (edited)

                    Seems like people are voting your comment as noise, but whatever.

                    You are trying to prove something normal ISN’T happening. I’m describing normal industry behavior.

                    Seems like you need to prove an abnormal sitch is occurring.

                    Edit: it’s like you’re asking for proof that they’ll build stairs with a handrail.

    • Voroxpete@sh.itjust.works · 8 months ago

      It sounds like what’s needed here is a version of this tool that makes the edits slowly, at random intervals, over a period of time. And perhaps it could randomise the text in each edit so that they’re all unusable garbage, but different unusable garbage (like someone else’s suggestion of using ChatGPT output at a really high temperature). Maybe it also only edits something like 25% of your total comment pool, and makes unnoticeably minor edits (add a space, remove a comma) to a whole bunch of other comments. Basically masking the poison by hiding it in a lot of noise?
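
      As a rough sketch, the scheduling side of such a tool might look like this (a hypothetical `build_edit_plan` helper; the fraction and time window are invented, and actually performing the edits against Reddit is left out entirely):

```python
import random

def build_edit_plan(comment_ids, poison_frac=0.25, days=90, seed=None):
    """Sketch of the masking strategy: poison a fraction of comments with
    garbage, make unnoticeable tweaks to the rest, spread over `days`.

    Returns (comment_id, action, day_offset) tuples in execution order.
    """
    rng = random.Random(seed)
    ids = list(comment_ids)
    rng.shuffle(ids)                       # randomise which comments get poisoned
    n_poison = int(len(ids) * poison_frac)
    plan = []
    for i, cid in enumerate(ids):
        action = "garbage" if i < n_poison else "minor_tweak"
        plan.append((cid, action, rng.uniform(0, days)))
    plan.sort(key=lambda item: item[2])    # execute in chronological order
    return plan
```

      The point of the random `day_offset` is exactly the masking discussed above: with no single high-edit day, there is no obvious timestamped event to cut the data at.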

      • GBU_28@lemm.ee · 8 months ago (edited)

        Now you’re talkin’.

        An intra-comment edit threshold would be fun to explore.