LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.
“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”
Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.
I’m sorry, you fucking what? How about you test the world’s population on PhD-level history and see if you get 46%? Are you fucking kidding me? You’re telling me this machine is half-accurate on PhD-level history and you’re trying to act like that doesn’t make your entire history department fucking useless? At most, you have five years until it’s better at the job than actual humans trained for it, because it’s already better than the public at large.
50% would be decent if the model had any idea of when it was actually correct. But 50% is not very good when the faulty half results in it going off on long tangents spewing lies. Lies that look incredibly real and take immense knowledge, or huge amounts of time, to check.
If you’re well versed enough in the subject to spot the lies, you likely won’t get much help from the AI. And if you aren’t, well, you’re going to learn a lot of incorrect information, or spend a ridiculous amount of time fact-checking.
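To make that trade-off concrete, here’s a minimal back-of-the-envelope sketch in Python. Every number except the 46% accuracy figure from the study is an invented assumption, purely for illustration:

```python
# Toy model of the "half right, but unflagged" problem.
# All timings are made-up assumptions; only the accuracy is from the study.
accuracy = 0.46       # fraction of answers that are correct (GPT-4 Turbo, per the study)
t_read = 1.0          # minutes to read an answer
t_verify = 15.0       # minutes to fact-check one plausible-looking claim
t_research = 20.0     # minutes to research the question yourself from scratch

# Because the model never flags which answers are wrong, you must
# verify every answer, not just the faulty 54%.
cost_with_llm = t_read + t_verify
cost_without_llm = t_research

print(f"LLM answer + fact-check: {cost_with_llm:.0f} min per question")
print(f"Researching it yourself: {cost_without_llm:.0f} min per question")
# The LLM only saves time if verification is much cheaper than research;
# for nuanced historical claims, it often isn't.
```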
It works a bit like that for software development at the moment. AI is incredible at spewing out code quickly, but the time saved by copying it is lost hunting for errors that are extremely well hidden.
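A toy illustration of what that looks like in practice (entirely hypothetical, not taken from any real model output):

```python
def moving_average(values, window):
    """Return the moving average of `values` over a sliding window.

    This is the kind of output an LLM might produce: it runs without
    complaint and looks right at a glance, but it's subtly wrong.
    """
    averages = []
    for i in range(len(values) - window + 1):
        # Hidden bug: the slice should be values[i:i + window].
        # This version sums only window - 1 elements but still
        # divides by window, so every average comes out too small.
        chunk = values[i:i + window - 1]
        averages.append(sum(chunk) / window)
    return averages

# Expected [2.0, 3.0, 4.0]; actually prints roughly [1.0, 1.67, 2.33].
print(moving_average([1, 2, 3, 4, 5], 3))
```

No crash, no warning, plausible numbers: exactly the sort of error you only catch by checking the output by hand.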
For it to be a totally fair test, you’d have to give the world’s population an open-book exam, since the model likely has every history book its creators could find in its training data.