“Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease,” they added. “We term this condition Model Autophagy Disorder (MAD).”
Interestingly, this could become a bigger problem as generative AI output makes up a growing share of the content online.
Note that humans do not exhibit this property when trained on other humans, so this would seem to prove that “AI” isn’t actually intelligent.
Almost as if current models are fancy token predictors with no reasoning about the input.
Weren’t the echo chambers during the COVID pandemic kind of proof that humans DO exhibit the same property? A good number of people started repeating stuff about nanoparticles, or claiming that the black lint in a mask was actually worms that would control your brain.
That only happened to some humans. Something must be seriously wrong with them.
Are we sure they were humans? Maybe they were ChatGPT 2.
Current AI is not actually “intelligent” and, as far as I know, not even their creators directly describe them as that. The programs and models that exist at the moment aren’t capable of abstract thinking or reasoning, or the other processes that make an intelligent being or thing intelligent.
The companies involved are certainly eager to create something like a general intelligence. But even if they reach that goal, we don’t know yet whether such an AGI would be truly intelligent.
Key point here being that humans train on other humans, not on themselves. They are also always exposed to the real world.
If you lock a human in a box and only let them interact with themselves they go a bit funny in the head very quickly.
The reason is different from what is happening with AI, though. Sensory deprivation or extreme isolation and the Ganzfeld effect lead to hallucinations because our brain seems to have to constantly react to stimuli in order to keep functioning. Our brain starts creating things from imagination.
With AI it is the other way around. A model loses information when trained on its own output again and again, because sampling favors the high-probability parts of its learned distribution: the rare cases in the tails get dropped, and the distribution narrows a little with every generation.
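Here’s a toy sketch of that narrowing, assuming the simplest possible “model”: a 1-D Gaussian refit by maximum likelihood on samples drawn from the previous fit. This isn’t the paper’s setup, just an illustration of the same autophagous loop; the parameter choices below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 100      # "training set" size per generation
n_generations = 500  # how many times we retrain on our own output

# Generation 0: the "real" data distribution.
mu, sigma = 0.0, 1.0

for gen in range(n_generations + 1):
    if gen % 100 == 0:
        print(f"gen {gen:3d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
    # "Generate" a synthetic dataset from the current model, then refit
    # by maximum likelihood (np.std divides by N, so it is biased low).
    data = rng.normal(mu, sigma, n_samples)
    mu, sigma = data.mean(), data.std()
```

Run it and sigma decays toward zero (the loss of diversity/recall) while mu drifts away from its true value (the loss of quality/precision), with no fresh real data in the loop to pull either one back.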
Humans are not entirely trained on other humans, though. We learn plenty of stuff from our environment and experiences. Note this very important part of the primary conclusion:
Math, for example, is something one could argue is taught purely by humans.
Dogs can do math and I’m quite sure I’ve never taught my dog that deliberately.
Even for humans learning it, I would expect that most of our understanding of math comes from everyday usage of it rather than explicit rote training.