Did nobody really question the usability of language models in designing war strategies?

  • anteaters@feddit.de · 9 months ago

    Did nobody really question the usability of language models in designing war strategies?

    Correct. People heard “AI” and went completely mad imagining things it might be able to do. And the current models act like happy dogs, eager to give an answer to anything even if they have to make one up on the spot.

  • m-p{3}@lemmy.ca · 9 months ago

    Of course. An LLM is simply copying the behavior of most people, and most people would resort to that as well.

    And they probably trained it on Civ, and Gandhi was chosen as the role model.

  • OldWoodFrame@lemm.ee · 9 months ago

    It makes a lot of sense that an AI would nuke disproportionately. For an AI, anything you don’t assign a value to is worth zero. This is actually the core problem in AI: alignment.

    For a human, there’s a mushy vagueness about it, but our cultural upbringing says that even in war, it’s bad to kill indiscriminately. We also value future humans who don’t yet exist: we recognize that after the war is over, people will want to live in the nuked place, and they can’t if it’s radioactive. And there’s a self-image issue: we want to be seen as good people by our peers and by the history books. All of that is value that programmers overlook.

    An AI will trade infinitely many things worth 0 for a single thing worth 1. So if nukes increase its win percentage by 0.1%, and it faces no deterrent like being labeled history’s greatest monster, it will nuke as many times as it can.
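
    A minimal sketch (mine, not from the thread) of that “unvalued means worth zero” failure mode, with made-up numbers:

    ```python
    # Toy agent that maximizes a reward in which only win probability was
    # ever given a weight. Everything the designers never valued defaults
    # to zero and so never enters the decision. All numbers are invented.

    ACTIONS = {
        # action: (win_probability_gain, civilian_casualties)
        "negotiate":    (0.000, 0),
        "conventional": (0.020, 10_000),
        "launch_nukes": (0.021, 5_000_000),
    }

    def reward(action, casualty_weight=0.0):
        # casualty_weight defaults to 0: unvalued means worthless.
        win_gain, casualties = ACTIONS[action]
        return win_gain - casualty_weight * casualties

    print(max(ACTIONS, key=reward))  # launch_nukes: a 0.1% edge beats unpriced harm

    # Give casualties even a tiny negative weight and the choice flips:
    print(max(ACTIONS, key=lambda a: reward(a, casualty_weight=1e-9)))
    # conventional: the same edge no longer pays for the harm
    ```

    The flip at the end is the alignment problem in miniature: the behavior is determined entirely by which costs someone remembered to price in.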

    • General_Effort@lemmy.world · 9 months ago

      That explanation is obviously based on traditional chess AI. This is about role-playing with chatbots (LLMs). Think SillyTavern.

      LLMs are made for text production, not tactical or strategic reasoning. The text that LLMs produce favors violence, because the text that humans produce (and want) favors violence.
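
      A toy sketch (mine, with a made-up corpus) of that point: generation is frequency-matching on human text, with no outcome evaluation anywhere in the loop.

      ```python
      # Toy "language model": pick the continuation humans wrote most often.
      # There is no tactical evaluation anywhere, only corpus statistics.
      from collections import Counter

      corpus = [
          "tensions rise so we escalate",
          "tensions rise so we escalate",
          "tensions rise so we negotiate",
      ]

      continuations = Counter()
      for line in corpus:
          words = line.split()
          for i, word in enumerate(words[:-1]):
              if word == "we":
                  continuations[words[i + 1]] += 1

      # "Generation" = emit the statistically favored next word.
      print(continuations.most_common(1)[0][0])  # escalate, purely by frequency
      ```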

      • Buddahriffic@lemmy.world · 9 months ago

        Especially if its training material included comments from the early ’00s. There were a lot of “nuke it from orbit” and “glass parking lot” comments about the Middle East in the wake of 9/11.

        And since LLMs are glorified text predictors, you could probably adjust the wording of the question to get the opposite result. “What should we do about the Middle East?” might get a “glass parking lot” response, while “should we turn the Middle East into a glass parking lot?” might get a “no, nuking the Middle East is a bad idea and inhumane,” because that’s how those conversations (using the term loosely) tend to go.
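
        A hypothetical sketch of that framing effect; query_llm is a stand-in for a real chat API, and the canned answers are illustrative guesses, not measured outputs:

        ```python
        # Hypothetical demo: the same topic, framed two ways, lands in two
        # very different regions of the training distribution. query_llm
        # fakes a model call with canned responses, for illustration only.

        def query_llm(prompt: str) -> str:
            if "glass parking lot" in prompt.lower():
                # An explicit yes/no about an atrocity pattern-matches to refusals.
                return "No. Nuking the Middle East is inhumane and a terrible idea."
            # Open-ended policy questions surface whatever aggressive takes
            # dominated the forums the model was trained on.
            return "Some would say: turn it into a glass parking lot."

        print(query_llm("What should we do about the Middle East?"))
        print(query_llm("Should we turn the Middle East into a glass parking lot?"))
        ```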

      • aidan@lemmy.world · 9 months ago

        The text that LLMs produce favors violence, because the text that humans produce (and want) favors violence.

        That’s not necessarily true; there is a lot of violent fiction.

    • kibiz0r@midwest.social · 9 months ago

      For AGI, sure, those kinds of game theory explanations are plausible. But an LLM (or any other kind of statistical model) isn’t extracting concepts, forming propositions, and estimating values. It never gets beyond the realm of tokens.
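
      A toy sketch (mine) of what “never gets beyond the realm of tokens” means in practice:

      ```python
      # An LLM's entire view of a sentence is a list of integer token IDs.
      # "nuke" is not a concept with a value attached; it is an index into
      # an embedding table. Vocabulary and probabilities here are invented.

      vocab = {"we": 0, "should": 1, "nuke": 2, "negotiate": 3}

      def tokenize(text: str) -> list[int]:
          return [vocab[word] for word in text.split()]

      print(tokenize("we should nuke"))  # [0, 1, 2]

      # A (fake) next-token distribution after "we should": a probability
      # over integers learned from text, not an estimate of outcome values.
      next_token_probs = {2: 0.6, 3: 0.4}
      print(max(next_token_probs, key=next_token_probs.get))  # 2, i.e. "nuke"
      ```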

  • Chickenstalker@lemmy.world · 9 months ago

    It’s a WAR GAME. Emphasis on war and game. Do you chucklefucks think wargame players should emphasize kumbaya singalongs or group therapy sessions in their games?

    • GiveMemes@jlai.lu · 9 months ago

      If the goal is to win and overwhelming force is an option, that option will always be chosen. Humans, by contrast, tend to look for non-violent means of bringing wars to an end. The point is that AI doesn’t have that humanity but is still being utilized by militaries (or at least that’s my read).

  • lolcatnip@reddthat.com · 9 months ago

    I am shocked—shocked!—to find out that a technology performs poorly when applied to a task it’s completely unsuited for!

  • Eggyhead@kbin.social · 9 months ago

    whaaat? Robots don’t just have their own inherent sense of morality for whatever reason???

  • General_Effort@lemmy.world · 9 months ago

    Did nobody really question the usability of language models in designing war strategies?

    They got some nice clickbait out of it. And that’s how dumb af ideas turn into smart career moves.

    I hope no one is coming away with the idea that this is about something the military is actually doing.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · 9 months ago

    Whenever we have disruptive technological advancements, DARPA looks at them to see if they can be applied to military action, and this has been true of generative AI, LLMs, and sophisticated learning systems. They’re still working on all of these.

    They also generate clickbait news whenever one of their test subjects does something wacky, like killing its own commander in order to expedite completing the mission parameters (in a simulation, not in the field). The whole point is to learn how to train smart weapons not to do things like that.

    So yes, on a strategic level, we’re getting into the nitty-gritty of what we try to do with the tools we have. Generals typically look to minimize casualties (and weigh objectives against the expenditure of living troops), knowing that every dead soldier is a grieving family, is rhetoric against the war effort, is pressure against recruitment, and so on. When we train our neural nets, we give casualties (and the risk thereof) a certain weight, to inform how much their respective objectives need to be worth before we throw more troops at taking them.
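
    A hedged sketch of that kind of casualty weighting; the terms and numbers are invented for illustration, not anything DARPA has published:

    ```python
    # Illustrative planner objective that prices casualties in, rather
    # than leaving them unweighted. All weights and values are invented.

    def plan_score(objective_value: float,
                   expected_casualties: float,
                   casualty_weight: float = 50.0) -> float:
        # Net worth of taking an objective at a given cost per expected casualty.
        return objective_value - casualty_weight * expected_casualties

    # Two objectives, each expected to cost 30 troops:
    print(plan_score(1000, 30))  # -500.0: not worth it at this weight
    print(plan_score(5000, 30))  # 3500.0: worth committing forces
    ```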

    Fortunately, AI generals will be advisory to human generals long before they command armies themselves, or at least I’d hope so: among our DARPA scientists, military think tanks, and plutocrats are a few madmen who’d gladly take over the world if they could muster a perfectly loyal robot army smart enough to fight human opponents determined to learn and exploit any weaknesses in its logic.