• flossdaily@lemmy.world · 57 points · 1 year ago

    I cloned my own voice to prank a friend, and… Wow, it was a gut-dropping moment when I understood just how dangerous this tool is for precisely this type of scam.

    It's one thing to hear about it, but to actually experience it… Terrifying.

      • flossdaily@lemmy.world · 20 points · 1 year ago

        Oh, it was really nothing more than showing off the technology. It wasn't a committed bit.

        I cloned my voice, then left a voicemail that said something like: "Hey buddy, it's me. My car broke down and I'm at… Actually, I don't know where I'm at. I walked to the gas station and borrowed this guy's phone. He said he'll give me a ride into town if I can get him $50. Could you Venmo it to him at @franks_diner? I'll pay you back as soon as I can find my phone. … By the way, this is really me, definitely not a bot pretending to be me."

  • Drusas@kbin.social · +28/−4 · 1 year ago

    As someone who has an uncanny ability to recognize voices, I'm skeptical about how good these really are. Of course, most people don't share that ability.

    Meanwhile, I could probably be fooled by a picture.

    • PilferJynx@lemmy.world · 16 points · 1 year ago

      Hmm, I understand your sentiment, but how would you know? Of course you'd pick out the bad dupes, but this technology is getting so good that I fear clones would go unnoticed, especially if they keep the detectable ones in circulation to reinforce the bias that fakes are easy to spot.

      • Drusas@kbin.social · +7/−1 · 1 year ago

        For me, it could be either. Some of us recognize people by their voices more than by their faces.

    • gregoryw3@lemmy.ml · 1 point · 1 year ago

      I don’t have the examples at hand, but I’ve listened to samples of various AI-generated clones (one paper had samples trained on, I believe, 10 s, 30 s, 1 min, and 5 min of audio), and each one progressively sounded better. The 10-second one basically sounded like a voice call whose bit rate dropped out mid-word, but so long as you used words with similar phonemes, the voice sounded pretty close. This is just my experience, though; what sounds pretty bad to you sounded fairly convincing to me, if under bad audio conditions.

      https://github.com/CorentinJ/Real-Time-Voice-Cloning

      This is the main one I’ve seen examples of. You’ll have to find the samples yourself; I believe they were in the actual paper.
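For what it's worth, the "sounded pretty close" judgment has a standard mechanical analogue: encoders in voice-cloning pipelines map an utterance to a fixed-length speaker embedding, and two clips are compared by cosine similarity against a threshold. A toy, self-contained sketch with synthetic vectors (this is not the repo's actual API, and the 0.75 threshold is made up for illustration):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(a: np.ndarray, b: np.ndarray, threshold: float = 0.75) -> bool:
    # Threshold is illustrative; real systems tune it on held-out data.
    return cosine_similarity(a, b) >= threshold

rng = np.random.default_rng(0)
voice = rng.normal(size=256)                     # stand-in embedding of the real voice
clone = voice + rng.normal(scale=0.1, size=256)  # a close clone: small perturbation
stranger = rng.normal(size=256)                  # an unrelated speaker

print(same_speaker(voice, clone))     # the close clone passes
print(same_speaker(voice, stranger))  # the unrelated voice fails
```

The uncomfortable implication is the same one made in the comment: once a clone's embedding lands inside the "same speaker" region, this kind of check can't tell it apart either.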

      • Arthur Besse@lemmy.ml · +1/−1 · 1 year ago

        That code was state of the art (for free software) when the author first published it with his master's thesis four years ago, but it hasn't improved a lot since then and I wouldn't recommend it today. See the Heads Up section of the readme. Coqui (a free software Mozilla spinoff) is better but also is sadly still nowhere near as convincing as the proprietary stuff.

        • gregoryw3@lemmy.ml · 3 points · 1 year ago

          Wait, it’s been 4 years? Time really goes by. Yeah, with most AI things I assumed those with more time and resources would create better models. Open-source AI is at a great disadvantage when it comes to dataset size and compute power.

    • Catfish [she/her]@lemmygrad.ml · +4/−1 · 1 year ago

      Yeah, but they'll call your family. A friend of mine was recently affected by this: a scammer had a clone of her voice asking for around $300 to fix her car because she'd gotten stranded in the middle of nowhere. So they call up your parents, and to your mom it's like "Oh no! My baby! Of course I'll help you!" and she gives them $300 thinking it's you.

      • Heratiki@lemmy.ml · 2 points · 1 year ago

        Yeah, my family knows better. I don’t call anyone either, plus I’ve got all of my family on DEFCON 1 when it comes to asking for money. Someone tried to scam my mom via Facebook by pretending to be my sister. Family members contact me ALL the time with issues with their stuff, so they don’t trust anything at all.

        This all stems from me getting scammed via email nearly 20 years ago, so I’ve educated everyone immensely.

  • sramder@lemmy.world · 11 points · 1 year ago

    Anyone know how many hours of training data it takes to build up a convincing model of someone’s voice? It was tens of hours when I did a bit of research a year ago… The article says social media is the likely source of training data for these scams, but that seems unlikely at this point.

    • treefrog@lemm.ee · +14/−1 · 1 year ago

      I don't remember the exact number but I did see an article recently that said it was videos on social media like you surmised.

      And it was a pretty minimal amount of data needed. Definitely not tens of hours. Less than one hour iirc.

      • Rozz@lemmy.sdf.org · 4 points · 1 year ago

        Is it safe to assume that if you don't have any family that posts videos to Facebook/socials you are in a safer place?

        • NeoNachtwaechter@lemmy.world · 3 points · 1 year ago

          if you don't have any family that posts videos to Facebook/socials you are in a safer place?

          You are safe only if you don't have any people at all whom you trust.

          But then you are having some other problems…

      • sramder@lemmy.world · 3 points · 1 year ago

        The technology has clearly come a long way in a short time, really fascinating.

        I remember the first examples I read about being trained with celebrity read audiobooks because they needed so much audio data. I want to say Tom Hanks or Anthony Hopkins but I could have that confused with something else.

    • CrabLangEnjoyer@lemmy.world · 10 points · 1 year ago

      A current state-of-the-art AI model from Microsoft can achieve acceptable quality with about 3 seconds of audio; commercially available stuff like ElevenLabs needs about 30 minutes. Quality will obviously vary heavily, but then again the scammers are working over a low-quality phone call, so maybe that's not so important.

      • sramder@lemmy.world · 3 points · 1 year ago

        That’s downright scary :-) I think it took longer in the last Mission Impossible.

        30 minutes is still pretty minimal for the kind of targeted attack it sounds like this is used for. I suppose we all need to work with our families on code words or something.

        I went in thinking the article was a bit alarmist, but that’s clearly not the case. Thanks for the insight.
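The family code-word idea can even be hardened against replay (a scammer reusing a recorded answer) by making the expected answer depend on a fresh challenge each call. A toy stdlib-only sketch; the passphrase and helper names are hypothetical, and in practice a family would just do the equivalent verbally ("what did we have for dinner last Sunday?"):

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret, agreed on in person and never posted online.
SHARED_SECRET = b"example family passphrase"

def make_challenge() -> str:
    # A fresh random challenge per call means a recorded old answer
    # (a replay) is useless to a scammer.
    return secrets.token_hex(8)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    # The caller's answer depends on both the secret and the challenge.
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, answer: str, secret: bytes = SHARED_SECRET) -> bool:
    return hmac.compare_digest(respond(challenge, secret), answer)

challenge = make_challenge()
print(verify(challenge, respond(challenge)))                   # genuine caller: True
print(verify(challenge, respond(challenge, b"wrong secret")))  # impostor: False
```

Nobody is going to compute HMACs mid-phone-call, of course; the takeaway is just that a question whose correct answer changes every time beats a static code word, which a scammer could capture once and reuse.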

      • madsen@lemmy.world · 2 points · edited · 1 year ago

        With that little, they may be able to recreate the timbre of someone's voice, but speech carries a multitude of other identifiers and idiosyncrasies that they're unlikely to get with that little audio, like personal vocabulary (we don't choose the same words and phrasings for things), specific pronunciations (e.g. "library" vs "libary"), voice inflections, etc. Obviously, the more training data you have, the better the output.
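The vocabulary point can be made concrete: even a crude bag-of-words comparison shows that the same speaker's habitual word choices overlap far more than two different speakers'. A toy sketch with made-up snippets (real stylometry uses far richer features than this):

```python
import re
from collections import Counter

def vocab_profile(text: str) -> Counter:
    """Crude stylometric profile: lowercase word frequencies."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def overlap(a: Counter, b: Counter) -> float:
    """Jaccard similarity of the two vocabularies (0.0 to 1.0)."""
    union = set(a) | set(b)
    return len(set(a) & set(b)) / len(union) if union else 0.0

alice1 = vocab_profile("gonna grab supper, wanna come along")
alice2 = vocab_profile("wanna grab supper later, gonna be around")
bob = vocab_profile("shall we get dinner this evening")

# The same speaker's habitual word choices overlap far more than a stranger's.
print(overlap(alice1, alice2) > overlap(alice1, bob))  # True
```

This is exactly the kind of signal a short clone is likely to miss: a 3-second sample captures timbre, but not whether someone says "supper" or "dinner".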

    • DontMakeMoreBabies@kbin.social · 5 points · edited · 1 year ago

      I literally just cloned someone's voice for a presentation on AI and did it using maybe 30 total minutes of audio…

      Took me about an hour and it was free. Hardest part was clipping the audio to get the 'good bits.'

      The voice was absolutely convincing.

    • Johanno@feddit.de · 2 points · 1 year ago

      The most advanced model I know of needs just half an hour of your voice or so.

      • sramder@lemmy.world · 5 points · 1 year ago

        Someone else mentioned that Microsoft has one capable of working with far less material.

        But 30 minutes is definitely short enough to make this sort of scam/attack feasible in my mind.

  • just_another_person@lemmy.world · +25/−18 · 1 year ago

    Whoever is stupid enough to think that Tom Hanks is calling them personally probably needs a court-appointed guardian.

    • TheFriar@lemm.ee · +53/−1 · 1 year ago

      Did you read the article? It’s talking about taking kids’ voices from TikTok and shit. Social media. People have been posting videos of themselves talking for years. That’s enough data to train an AI to leave a message saying, “Mom, I lost my phone and I’m in trouble. I need some money,” or something of that sort. It’s been happening for a long time. This is only making it more convincing.

      • bionicjoey@lemmy.ca · 6 points · 1 year ago

        I'm so fucking glad that I've hardly ever had my voice and likeness posted publicly on the internet

        • TheFriar@lemm.ee · +6/−1 · 1 year ago

          Same. I managed to stay off of social media, and I was the prime age for it at every turn. MySpace came around when I was in middle school/early high school. Facebook was opened up to everyone in late high school. Instagram came around when I was in college—and when I was traveling. I’m so glad I was that super annoying kid calling everything a conspiracy to steal my likeness/steal my data…who knew my need to be a contrarian as an anarchist teen would be so helpful?

          I mean…I also grew up into an anarchist adult. So I just got lucky that I found the right books and music to push me in that direction young.

            • TheFriar@lemm.ee · 1 point · 1 year ago

              A lot of CrimethInc., Emma Goldman, and Adbusters in high school (Adbusters isn’t a book, but it was still deep in my repertoire). From there, Hannah Arendt, Chomsky, etc. in late high school/college. I also listened to a lot of Anti-Flag, Against Me!, Propagandhi, Strike Anywhere… all of my media was very anarchist/anti-government/anti-capitalist. I stood no chance lol.

              And as someone who was young enough to feel angry (and justifiably so… Bush/Cheney and the Patriot Act were all happening; I had plenty of reason to be wary of spying), admittedly I was following these things and knew what was happening, but I was still just a contrarian at heart. I could yell and argue with my parents’ friends, but I probably sounded like an ass. I didn’t fully know how to hold these beliefs; they were more knee-jerk reactions fueled by hormones and an insane set of circumstances in the world. A lot of the embarrassing memories that come to me randomly when I’m trying to fall asleep have to do with being up in arms about something I wasn’t really qualified to speak on lol

              I’m sure I was more annoying than I was inspiring

      • just_another_person@lemmy.world · +6/−4 · 1 year ago

        The entire article is about scammers using AI models of voices you know and trust. None of these scam rings have the time to drill down to your particular family.

        • TheFriar@lemm.ee · +2/−1 · edited · 1 year ago

          You sure? It would be very easy for these scammers to make a bot that trawls those “address/people lookup” sites, gets family names and numbers, searches for any relatives’ public social media, and compiles that footage. It wouldn’t be much work at all after creating the bot; those creepy people-lookup sites list an absurd amount of information. And think of how much work already goes into scams that rely on sheer numbers to boost the likelihood of a basic ruse working. Even if they trimmed the list of available phone numbers down to 30%, or just 15%, they’d now have personal information and an in by imitating someone the target knows and loves. That’s still a fuck load of people, and the likelihood of success would shoot WAY up while actually cutting the amount of work they’d need to do. So I’d argue you have that backwards.

    • Margot Robbie@lemmy.world · +10/−1 · edited · 1 year ago

      Unless you actually know Tom Hanks personally and are expecting a call from him, of course.

  • TheFriar@lemm.ee · +4/−5 · 1 year ago

    The Industrial Revolution and its consequences have been a disaster for the human race.