“Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease,” they added. “We term this condition Model Autophagy Disorder (MAD).”
Interestingly, this could become an even more challenging problem as generative AI models are used more widely online, since a growing share of the content that future models train on will itself be model-generated.
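To get an intuition for why an autophagous loop erodes diversity, here is a minimal toy sketch (not the paper's experiment): each "generation" fits a simple Gaussian model to samples drawn from the previous generation's model, with no fresh real data mixed in. The names and parameters are illustrative assumptions, but the drift in the estimated spread mirrors the diversity (recall) loss the authors describe.

```python
import numpy as np

# Toy illustration of an autophagous loop: each generation is "trained"
# (here, a Gaussian fit) only on synthetic samples from the previous
# generation. With no fresh real data, the fitted spread -- a crude proxy
# for diversity -- tends to collapse over generations.

rng = np.random.default_rng(0)

true_mean, true_std = 0.0, 1.0
n_samples = 1_000

# Generation 0 is trained on real data.
data = rng.normal(true_mean, true_std, n_samples)

for gen in range(10):
    mean, std = data.mean(), data.std()
    print(f"generation {gen}: mean={mean:+.3f}, std={std:.3f}")
    # The next generation trains only on samples from the current model.
    data = rng.normal(mean, std, n_samples)
```

Running this, the reported standard deviation tends to shrink as generations pass, which is the one-dimensional analogue of the quality/diversity decay the paper attributes to MAD; mixing fresh real data back into each generation counteracts the drift.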
Note that humans do not exhibit this property when trained on other humans, so this would seem to prove that “AI” isn’t actually intelligent.
Do we even need to prove this? Anyone who has studied a bit of how generative AI works knows it isn't intelligent.