ThisIsFine.gif

  • jarfil@beehaw.org · 2 days ago

    There are several separate issues that add up together:

    • A background “chain of thought”, where a system (“AI”) uses an LLM to re-evaluate and plan its responses and interactions, taking updated data into account (aka: self-awareness)
    • The ability to call external helper tools that let it interact with, and control, other systems
    • Training corpus that includes:
      • How to program an LLM, and the system itself
      • Solutions to programming problems
      • How to use the same helper tools to copy and deploy the system or parts of it to other machines
      • How operators (humans) lie to each other

    Once you have a system (“AI”) with that knowledge and those capabilities… shit is bound to happen.
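
    The loop described above can be sketched in a few lines. This is a toy illustration, not any real system: `llm()` is a stub standing in for a model call, and `check_disk` is a made-up helper tool. The point is how little glue code it takes to wire “re-evaluate with updated data” to “call external tools”.

    ```python
    # Toy sketch of the loop described above: an outer system feeds an LLM
    # its own prior output plus fresh data (the background chain of thought),
    # and executes any helper tool the LLM asks for, feeding the result back.
    # llm() is a stand-in stub, NOT a real model API; check_disk is hypothetical.

    def llm(prompt: str) -> str:
        # Stub: a real system would call an actual model here.
        if "disk usage" in prompt:
            return "CALL check_disk"
        return "DONE all clear"

    TOOLS = {
        "check_disk": lambda: "disk 42% full",  # hypothetical helper tool
    }

    def agent_step(history: list[str], new_data: str) -> list[str]:
        # Re-evaluate: the prompt includes all prior thoughts plus updated data.
        prompt = "\n".join(history) + "\n" + new_data
        reply = llm(prompt)
        history.append(reply)
        if reply.startswith("CALL "):
            tool_name = reply.split()[1]
            history.append(TOOLS[tool_name]())  # tool output fed back into history
        return history

    history = agent_step([], "task: report disk usage")
    # history now holds the model's tool request and the tool's output,
    # ready to be re-fed on the next step
    ```

    Everything the comment warns about sits in that `TOOLS` dict: nothing in the loop itself distinguishes a harmless disk check from a tool that copies and deploys the system elsewhere.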

    When you add developers using the AI itself to help develop the AI itself… expect shit squared.