• Riskable@programming.dev · 19 days ago

    OK, how would that work:

    find me some good recipes for hibachi style ginger butter

    AI model returns 10 links, 4 of which don’t actually exist (because it hallucinated them)? No. If they didn’t exist, it wouldn’t have returned them because it wouldn’t have been able to load those URLs.
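    A minimal sketch of what I mean (Python with the requests library; the URLs and function name here are just placeholders, not any particular product's code): the candidate links get fetched, and anything that doesn't actually load gets dropped before it's ever shown.

    ```python
    import requests  # third-party: pip install requests

    def filter_dead_links(candidate_urls):
        """Keep only URLs that actually respond; hallucinated ones won't."""
        live = []
        for url in candidate_urls:
            try:
                resp = requests.head(url, timeout=5, allow_redirects=True)
                if resp.status_code < 400:
                    live.append(url)
            except requests.RequestException:
                pass  # DNS failure, timeout, refused connection -> treat as nonexistent
        return live

    # Hypothetical model output: placeholder URLs, one of them made up.
    print(filter_dead_links([
        "https://example.com/hibachi-ginger-butter",
        "https://totally-made-up-recipe-site.example/ginger-butter",
    ]))
    ```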

    It’s possible it could get it wrong because of some new kind of LLM scamming method, but that’s not “making shit up,” that’s malicious URLs.

    • Lemminary@lemmy.world · 19 days ago

      If they didn’t exist, it wouldn’t have returned them

      And yet I’ve had Bing’s Copilot/ChatGPT (with plugins like Consensus), Gemini, and Perplexity do exactly that, but worse. Sometimes they’ll cite sources that don’t mention anything related to the answer they’ve given, because the information actually comes from other training data they can’t source. Even when they’re asked to provide a source, they won’t necessarily give you one. Hell, sometimes they’ll answer an adjacent question just to spit out an answer, any answer, to fulfill the request.

      LLMs are simply not the appropriate tool for the job. That’s most obvious when you need specificity and accuracy.

      • Riskable@programming.dev · 17 days ago

        Yeah… The big commercial models have system prompts that fuck it all up. That’s my hypothesis, anyway.

        You have to try it with an open source model. You tell it to return the titles, the URLs, and nothing else. That seems to work fantastically 👍

        I’m doing it with Open WebUI and Ollama’s cloud, which serves open source models that you could also run locally, if you have like $5,000 worth of hardware.
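        Very roughly the kind of thing I mean, as a sketch against Ollama’s local REST API (the model name and prompt wording are placeholders, not exactly what I run):

        ```python
        import requests  # third-party: pip install requests

        # Placeholder model name; use whatever you've pulled with `ollama pull`.
        payload = {
            "model": "llama3",
            "messages": [
                {"role": "system",
                 "content": "Return only result titles and URLs, one per line. Nothing else."},
                {"role": "user",
                 "content": "find me some good recipes for hibachi style ginger butter"},
            ],
            "stream": False,
        }

        # Ollama's chat endpoint listens on localhost:11434 by default.
        resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
        resp.raise_for_status()

        # With stream=False the reply comes back as a single JSON object.
        for line in resp.json()["message"]["content"].splitlines():
            print(line)  # expect "Title - URL" pairs; still worth checking each URL loads
        ```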