Google has plunged the internet into a “spiral of decline”, the co-founder of the company’s artificial intelligence (AI) lab has claimed.

Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: “The business model that Google had broke the internet.”

He said search results had become plagued with “clickbait” to keep people “addicted and absorbed on the page as long as possible”.

Information online is “buried at the bottom of a lot of verbiage and guff”, Mr Suleyman argued, so websites can “sell more adverts”, fuelled by Google’s technology.

  • lloram239@feddit.de · 1 year ago

    > because they give more accurate information, that simply is not true.

    From my experience with BingChat, it's completely true. BingChat will search with Bing and summarize the results, providing sources and all. And the results are complete garbage most of the time, since search results are filled with garbage.

    Meanwhile if you ask ChatGPT, which doesn't have Internet access, you get a far more sophisticated and correct answer. You can also ask follow-up questions.

    Web search is an absolutely terrible place for accurate information. ChatGPT, in contrast, consumes all the information out there, which makes it much harder for incorrect information to slip in, as information needs to be replicated frequently to stick around. It can be and often is still wrong, of course, but it is far better than any single website you'll find.

    And of course these are still very early days for LLMs. GPT was never built with correctness in mind; it was built to autocomplete text, and everything else was patchwork after the fact. The future of search is AI, no doubt about that.

    • sndrtj@feddit.nl · 1 year ago

      ChatGPT flat out hallucinates quite frequently in my experience. It never says "I don't know / that is impossible / no one knows" to queries that simply don't have an answer. Instead, it opts to give a plausible-sounding but completely made-up answer.

      A good AI system wouldn't do this. It would be honest, and give no results when the information simply doesn't exist. However, that is quite hard to do for LLMs as they are essentially glorified next-word predictors. The cost metric isn't on accuracy of information, it's on plausible-sounding conversation.
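The "next-word predictor" point can be illustrated with a toy sketch. This is a bigram frequency model, nothing like a real LLM's architecture, and the names (`predict_next`, `generate`) are invented for illustration; the point it demonstrates is the same, though: the training objective rewards whatever continuation is most frequent in the training text, not whatever is true.

```python
from collections import Counter, defaultdict

# Tiny "training corpus". Note the objective only sees word frequencies,
# never facts -- a wrong statement repeated often enough would win.
corpus = ("the capital of france is paris . "
          "the capital of france is nice . "
          "the capital of france is paris .").split()

# Count bigram frequencies: which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most plausible (i.e. most frequent) continuation."""
    return bigrams[word].most_common(1)[0][0]

def generate(prompt, steps):
    """Autoregressively extend the prompt, one 'most plausible' word at a time."""
    words = prompt.split()
    for _ in range(steps):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("the capital of", 3))  # → "the capital of france is paris"
```

Here "paris" wins only because it appears more often after "is" than "nice" does; flip the frequencies in the corpus and the model would confidently emit the wrong answer, with no notion that anything changed.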

      • pascal@lemm.ee · 1 year ago

        Ask ChatGPT "tell me the biography of the famous painter sndrtj" to see how good the bot is at hallucinating an incredibly realistic story that never happened.

        • CarlsIII@kbin.social · 1 year ago

          You don’t even have to make stuff up to get it to hallucinate. I once asked ChatGPT who the original bass player for Metallica was, and it repeatedly gave me the wrong answer, even at one point saying “Dave Ellefson.”