Funny how the average person figured this out almost immediately while Google needed half a year to figure it out with their researchers. Almost like they were ignoring it as long as they could for the sake of profit. Fuck around and find out, I guess.
Where were the signs? Why didn’t someone warn us?!
If only anyone - anyone at all - could have foreseen this horrible outcome
Version that doesn’t require an account: https://archive.is/OA7Jb
Google Researchers Now Also Say We All Should Use Their Shit AI Search That Tells Us To Eat Glue
GRNASWASUTSASTTUTEG.
It’s alright guys–I just looked up a solution and Google suggests eating glue and a few small pebbles will solve the issue.
With their shitty AI, this belongs on Not The Onion.
They’re admitting that they are the source of a massive problem. But are they going to do anything about it, or keep pushing their shitty, half-baked AI? It’s crazy to me how much worse their AI is than ChatGPT, considering all of the financial and engineering resources available to Google.
Ahh, just in time for the election season.
I think it’s almost certain that disinformation based on fake accounts simply posting memes or targeted viewpoints, hoping to send the message through sheer repetition, is still a lot more common than doctored factual information. (Not that that means faked-up disinformation isn’t a problem - just saying I think it’s still relatively rare as a vehicle for disinformation.)
Why would you even open yourself up to “see, the underlying citation for this thing they’re saying is not true” when you might as well not enter into the sphere of backing up what you’re saying with facts at all, and just state your assertions as if they were facts instead?