• 👁️👄👁️@lemm.ee · 1 year ago

    There is nothing in an LLM that can verify truth. They should not be relied on for accurate information unless we make some sort of technological breakthrough on that front. They are really good at generating plausible text, though.

    • j4k3@lemmy.world · 1 year ago (edited)

      People, and the internet, are no different. The vast majority of information that exists is incomplete or wrong at some level. Skepticism is always required, but assessing any medium by its actual performance, without premeditated bias, is the only intelligent approach that can grow with improving technology.

      Very few people are running the larger models (65B parameters or more) in an environment where they fully control every aspect of the LLM. I have such a setup running offline on my own hardware. On the tasks I use it for, my system is ~95% accurate, and it is more accurate at those tasks than the results I find searching the internet.
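      For context on what "fully control every aspect" can look like, here is a minimal offline-inference sketch using llama-cpp-python with a quantized GGUF model. The model path and sampling parameters are illustrative placeholders, not a description of my exact setup.

      ```python
      # Minimal offline inference sketch (llama-cpp-python).
      # Model file and parameters are illustrative placeholders.
      from llama_cpp import Llama

      llm = Llama(
          model_path="./models/llama-65b.Q4_K_M.gguf",  # hypothetical local weights
          n_ctx=4096,       # context window
          n_gpu_layers=-1,  # offload all layers to the GPU if one is available
          verbose=False,
      )

      out = llm(
          "List three limitations of LLMs for factual queries.",
          max_tokens=256,
          temperature=0.2,  # low temperature for more repeatable answers
      )
      print(out["choices"][0]["text"])
      ```

      Nothing leaves the machine: the weights, the prompt, and the output all stay local, which is the whole point of this kind of setup.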

      There are already open-source, offline models specifically designed to work over archives of scientific papers, where every result cites the source from its database.
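      The common pattern behind that is retrieval-augmented generation: answers are grounded in retrieved chunks, and each chunk carries its source, so the citation comes for free. A rough sketch of the retrieval side (the embedding model is a real library default, but the toy corpus and helper names are just illustrations):

      ```python
      # Sketch of citation-aware retrieval: every returned chunk
      # keeps a pointer back to the document it came from.
      import numpy as np
      from sentence_transformers import SentenceTransformer

      # Toy corpus; in a real system these would be chunks of archived papers.
      corpus = [
          {"text": "Self-attention relates every token to every other token.",
           "source": "vaswani2017.pdf"},
          {"text": "Quantization trades a little accuracy for much less memory.",
           "source": "quant_survey.pdf"},
      ]

      embedder = SentenceTransformer("all-MiniLM-L6-v2")
      doc_vecs = embedder.encode([c["text"] for c in corpus],
                                 normalize_embeddings=True)

      def retrieve(query, k=1):
          """Return the top-k chunks together with their citations."""
          q = embedder.encode([query], normalize_embeddings=True)[0]
          scores = doc_vecs @ q  # cosine similarity; vectors are unit-normalized
          top = np.argsort(scores)[::-1][:k]
          return [(corpus[i]["text"], corpus[i]["source"]) for i in top]

      for text, source in retrieve("How do transformers process input?"):
          print(f"{text}  [source: {source}]")
      ```

      The generator then only sees those retrieved chunks, so every claim in the response can point back to a specific document.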

      Agents are a class of AI built as a multi-model system: one model can route a prompt to more specialized models, or through a series of models equipped to check a response, cite its sources, or verify it against a database.
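      As a toy illustration of that routing-and-verification flow, the sketch below wires a router, a specialist, and a verifier together. Every component here is a hypothetical stand-in; a real agent would call a separate model at each step.

      ```python
      # Toy agent pipeline: route -> draft answer -> verify with citation.
      # All three stages are stand-ins for separate, specialized models.
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Answer:
          text: str
          source: Optional[str]  # citation, if the verifier found one

      # Stand-in fact database; a real verifier would query a proper store.
      FACT_DB = {"boiling point of water": ("100 C at 1 atm", "CRC Handbook")}

      def router(prompt: str) -> str:
          # A small router model would classify the prompt; keyword match here.
          return "science" if "water" in prompt.lower() else "general"

      def specialist(domain: str, prompt: str) -> str:
          # Stand-in for a domain-tuned model producing a draft answer.
          entry = FACT_DB.get(prompt.lower())
          return entry[0] if entry else "I am not sure."

      def verifier(prompt: str, draft: str) -> Answer:
          # Stand-in for a checker model: confirm the draft against the
          # database and attach a citation, or flag it as unverified.
          entry = FACT_DB.get(prompt.lower())
          if entry and entry[0] == draft:
              return Answer(draft, entry[1])
          return Answer(draft + " (unverified)", None)

      def agent(prompt: str) -> Answer:
          draft = specialist(router(prompt), prompt)
          return verifier(prompt, draft)

      print(agent("Boiling point of water"))
      # -> Answer(text='100 C at 1 atm', source='CRC Handbook')
      ```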