OpenAI was working on an advanced model so powerful it alarmed staff: Reports say new model Q* fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking

  • Darkassassin07@lemmy.ca · 7 months ago

    So staff requested the board take action, then those same staff threatened to quit because the board took action?

    That doesn’t add up.

    • FrostyTrichs@lemmy.world · 7 months ago

      The whole thing sounds like some cockamamie plot derived from ChatGPT itself. Corporate America is completely detached from the real world.

      • db2@sopuli.xyz · 7 months ago

        That’s exactly what it is. A ploy for free attention and it’s working.

          • db2@sopuli.xyz · 7 months ago

            ploy
            /ploi/
            noun
            a cunning plan or action designed to turn a situation to one’s own advantage.

            Except for the cunning part it seems to be a pretty good description.

            • pulaskiwasright@lemmy.ml · 7 months ago

              There’s no way the board members tarnished their reputations and lost their jobs so they could get attention for a company they no longer work for and don’t have a stake in. That’s just silly.

              • assassinatedbyCIA@lemmy.world · 7 months ago

                I don’t think the firing was a ploy, but I do think the retroactive justification of ‘we were building a model so powerful it scared us’ is a ploy to drum up hype. Just like all the other times they’ve said the same thing.

        • Identity3000@lemmy.world · 7 months ago

          That’s an appealing ‘conspiracy’ angle, and I understand why it might seem juicy and tantalising to onlookers, but that idea doesn’t hold up to any real scrutiny whatsoever.

          Why would the Board willingly trash their reputation? Why would they drag the former Twitch CEO through the mud and make him look weak and powerless? Why would they not warn Microsoft and risk damaging that relationship? Why would they let MS strike a tentative agreement with the OpenAI employees that upsets their own staff, only to then undo it?

          None of that makes any sense whatsoever from a strategic, corporate “planned” perspective. These are all actions of people reacting in the heat of the moment and panicking because they don’t know how it will end.

          • db2@sopuli.xyz · 7 months ago

            Why would the Board willingly trash their reputation?

            What reputation?

            Why would they drag the former Twitch CEO through the mud and make him look weak and powerless?

            Why would they care about that?

            Why would they not warn Microsoft and risk damaging that relationship? Why would they let MS strike a tentative agreement with the OpenAI employees that upsets their own staff, only to then undo it?

            Microsoft has put their entire sack in OpenAI’s purse. They could literally do or say anything to Microsoft.

            Are you telling me you really think it’s outlandish to think the same people who push a glorified nested ‘if’ statement as AI would do what it said to do? Those people are goofy; if they thought they were being given a convoluted real-life quest by a digital DM, they’d be all about it.

          • db2@sopuli.xyz · 7 months ago

            What’s that got to do with anything? They sell a thing, they want the thing to sell more.

            • Echo Dot@feddit.uk · 7 months ago

              I think pretty much the entire world knows about ChatGPT, so advertising clearly isn’t an issue for them. Firing your CEO is not really a good look unless you’ve got a very, very good reason, in which case you should announce it.

              • db2@sopuli.xyz · 7 months ago

                Which they didn’t because it’s fake grandstanding bullshit.

    • bionicjoey@lemmy.ca · 7 months ago

      OpenAI loves to “leak” stories about how they’ve developed an AI so good that it is scaring engineers because it makes people believe they’ve made a massive new technological breakthrough.

      • Taleya@aussie.zone · 7 months ago

        Meanwhile, anyone who works in tech immediately thinks “some C-suite dickhead just greenlit ED-209”

    • RedditWanderer@lemmy.world · 7 months ago

      More like:

      • They get a breakthrough called Q* (Q star), which is just a combination of two things we already knew about.

      • Chief scientist dude tells the board Sam already has plans for it.

      • Board says Sam is going too fast with his “breakthroughs” and fires him.

      • The original scientist who raised the flag realized his mistake and started supporting Sam, but the damage was done.

      • Microsoft

      My bet is the board freaked out at how “powerful” they heard it was (which is still unfounded; from what various articles explain, Q* is not very groundbreaking) and jumped the gun. So now everyone wants them to resign, because they’ve shown they’ll take drastic action on things they don’t understand without asking.

    • maegul@lemmy.ml · 7 months ago

      There’s clearly a good amount of fog around this. But one thing that does seem true is that at least some OpenAI people have behaved poorly: Altman, the board, some employees, the mainstream of the employees, or maybe all of them in some way or another.

      What we know about the employees is the petition, which ~90% of them signed. Many were quick to point out the weird peer pressure that likely surrounded it. Amongst all that, some employees raising alarm about the new AI to the board or other higher-ups is perfectly plausible. Either they were also unhappy with the poorly managed Altman sacking, never signed the petition, or signed it while not really wanting Altman back that much.