Breakthrough Technique: Meta-learning for Compositionality

Original:
https://www.nature.com/articles/s41586-023-06668-3

Popularization:
https://scitechdaily.com/the-future-of-machine-learning-a-new-breakthrough-technique/

How MLC Works
In exploring the possibility of bolstering compositional learning in neural networks, the researchers created MLC, a novel learning procedure in which a neural network is continuously updated to improve its skills over a series of episodes. In an episode, MLC receives a new word and is asked to use it compositionally—for instance, to take the word “jump” and then create new word combinations, such as “jump twice” or “jump around right twice.” MLC then receives a new episode that features a different word, and so on, each time improving the network’s compositional skills.
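
To make the episode structure concrete, here is a minimal, self-contained sketch of MLC-style episodic training in PyTorch. This is a toy reconstruction, not the authors' code: the pseudo-word grammar, the `sample_episode` helper, and the tiny GRU encoder-decoder are all invented for illustration (the paper itself uses a standard sequence-to-sequence transformer; the novelty is the episodic curriculum, not the optimizer).

```python
# Toy sketch of MLC-style episodic training -- NOT the authors' code.
# Each episode invents a new pseudo-word, shows the network one study
# example, and asks it to use the word compositionally, e.g.
# study "dax = JUMP", query "dax twice" -> "JUMP JUMP".
import random
import torch
import torch.nn as nn

# --- toy vocabulary (invented for this sketch) -----------------------
actions = ["JUMP", "WALK", "RUN"]              # output primitives
pseudo_words = ["dax", "wif", "lug", "zup"]    # stand-ins for new words
modifiers = {"twice": 2, "thrice": 3}          # compositional operators
in_tokens = pseudo_words + list(modifiers) + actions + ["=", ";"]
in_vocab = {t: i for i, t in enumerate(in_tokens)}
out_vocab = {t: i for i, t in enumerate(actions + ["<eos>"])}
EOS = out_vocab["<eos>"]

def sample_episode():
    """One episode: a study example plus a compositional query."""
    w = random.choice(pseudo_words)
    a = random.choice(actions)
    m = random.choice(list(modifiers))
    src = [w, "=", a, ";", w, m]               # study pair, then query
    tgt = [a] * modifiers[m] + ["<eos>"]       # expected output sequence
    return (torch.tensor([in_vocab[t] for t in src]),
            torch.tensor([out_vocab[t] for t in tgt]))

class EpisodeModel(nn.Module):
    """Tiny GRU encoder-decoder standing in for the paper's transformer."""
    def __init__(self, d=64):
        super().__init__()
        self.emb_in = nn.Embedding(len(in_vocab), d)
        self.emb_out = nn.Embedding(len(out_vocab), d)
        self.enc = nn.GRU(d, d, batch_first=True)
        self.dec = nn.GRU(d, d, batch_first=True)
        self.head = nn.Linear(d, len(out_vocab))

    def forward(self, src, tgt):
        _, h = self.enc(self.emb_in(src)[None])   # encode study + query
        bos = torch.full((1, 1), EOS)              # reuse <eos> as start token
        dec_in = torch.cat([bos, tgt[None, :-1]], dim=1)  # teacher forcing
        out, _ = self.dec(self.emb_out(dec_in), h)
        return self.head(out)[0]                   # (seq_len, vocab) logits

model = EpisodeModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Weights are updated across a stream of episodes, each built around a
# *different* new word, so the network improves at the general skill of
# composing rather than memorizing any particular word.
for step in range(2000):
    src, tgt = sample_episode()
    loss = nn.functional.cross_entropy(model(src, tgt), tgt)
    opt.zero_grad()
    loss.backward()
    opt.step()
```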

  • A_A@lemmy.worldOP · 8 months ago

    Edit: Please read @DigitalMus@feddit.dk’s comment before mine.


    Hey folks, I believe this is really big.

    Training a traditional deep neural network requires millions of examples, so despite its great success it is immensely inefficient.

    Now, what if these machines could learn as fast as, or faster than, a human? Well, it seems this is it.

    Look at how disruptive large language models are for many sectors of society. This new technique could accelerate that process exponentially.

    • TropicalDingdong@lemmy.world · 8 months ago

      Training a traditional deep neural network requires millions of examples, so despite its great success it is immensely inefficient.

      Is this a limited advancement in training techniques? Right now I’m working on several types of image classification models. How would this be able to help me?

          • QueriesQueried@sh.itjust.works · 8 months ago

            Admittedly, they were quoting someone else in the message you responded to. That may have been edited after the fact, but the person they’re quoting did in fact say those words (“this is big”).

            It was I who couldn’t read, as that is not what happened.

          • A_A@lemmy.worldOP · 8 months ago

            I am not sure what “image classification models” encompasses. I would have to read more to understand, and I don’t have enough time and energy.
            Still, in the past I have read and understood a few books about neural networks, and this new article in Nature is something else: that’s clear when reading it.
            (also to @TropicalDingdong@lemmy.world)

            • TropicalDingdong@lemmy.world · 8 months ago

              I mean, is this any different from standard gradient descent with something like Adam as the optimiser?

              That’s my assumption based on the headline. But from the quick skim I gave the article, it seems to discuss this only in the context of NLP. Not exactly my field of study.

    • Stantana@lemmy.sambands.net · 8 months ago

      It’s over for us useless eaters; no matter how useful one is, we will always be useless compared to what’s coming.

      • A_A@lemmy.worldOP · 8 months ago

        There is still some hope; maybe the machines will have more compassion than humans do, or maybe we are already inside the matrix as useful parts.
        There are so many unknowns in the future, and our insights are so limited.

    • ExLisper@linux.community · 8 months ago

      Now, what if these machines could learn as fast as, or faster than, a human?

      What do you mean? It’s already faster than a human’s. It takes years for a person to learn basic language and decades to gain expert knowledge in any field.

      • A_A@lemmy.worldOP · 8 months ago

        What is meant here (and stated in the article) is that humans can learn from a single example, while deep neural networks take thousands or millions of examples to learn.

        • ExLisper@linux.community · 8 months ago

          OK, but neural networks can process way more examples per second, so ‘faster’ is not really the right term here.

          • A_A@lemmy.worldOP · 8 months ago

            Yes, you are right, and I was hoping someone more knowledgeable would help clarify this topic.

            Well, I was lucky to get the comment from @DigitalMus here, if you would like to read it.