I was thinking about this after a discussion at work about large language models (LLMs) - the initial scrape of the internet before ChatGPT became publicly usable was probably the last truly high-quality scrape of human-made content any model will get. The second ChatGPT went public, the data pool became tainted with people publishing its output. Future language models will have increasingly large percentages of their data tainted by AI-generated content, skewing the results away from how humans actually write. To get actual human content, they may need to turn to transcriptions of audio recordings or phone calls for training, and even that wouldn’t be quite right, because people write differently than they speak.
I sort of wonder if eventually people will start being influenced in how they choose to write by seeing all this AI content. If teachers use AI-generated texts in school lessons, especially at lower levels, will that affect how kids end up writing and formatting their work? It’s weird to think about the wider implications of how this AI stuff will ultimately impact society.
What are your predictions? Is there a future where AI can get a clean, human-made scrape? Are we doomed to start writing like AIs?
I think it could end up being a problem that we face in the future, but probably not an insurmountable one.
For one, I suspect that clean data sources will always be available, though it could become a lot more expensive to obtain. As an extreme example, you could always source your data by recording in-person conversations.
Also, as AI improves, I’m guessing it will be able to handle bad data more gracefully, and that it should be able to train to the same effectiveness while using a smaller dataset.
I feel like if you tried to train an LLM on spoken conversational English the output would just be “yeah um yeah um yeah um”
But on a more serious note spoken English is very different than written.
Either way, you can find validated sources of human-written text; it just won’t be as easy.
Maybe an LLM that can hold a normal-sounding spoken conversation will be the next step. The Turing test, but speaking instead of typing. I assume the neural networks could learn things like intonation.
Writing is not easy; people go to college for years to learn how to do it. Unless the actual skill of writing can be instilled into an LLM, they won’t replace people.
The companies that try to use them to replace writers now are the companies that will feel the repercussions first: poor quality, no experienced employees, and lack of business.
An LLM will never be able to replace writers; they lack an understanding of the core concepts that are actually involved.
Besides, who is going to write things to train the AI?
It’s not going to replace actual dedicated writers, but it’s definitely going to hinder people learning to write and make up a large portion of the text online. It may also make it harder for actual writers to be found in all the noise. I heard a little while back about a scifi magazine which had to close its submissions because it was getting too many AI-written stories and sorting through the real versus fake was becoming difficult for them.
As for who’s going to train the AI, that’s part of what I’m arguing here - future LLMs are going to wind up being trained on AI-generated text because there will be so much of it online that screening it out becomes near impossible. Reddit mods already have challenges screening ChatGPT bots out of their comments. When a future LLM scrapes the web for written words, it’ll come back with lots of garbage AI text, which will taint its learning pool. AIs will learn from AIs and become worse for it.
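To make the screening problem concrete, here’s a toy keyword filter of the kind a scrape pipeline might use (the page list and tell-tale phrases are made up for illustration). It only catches text carrying obvious assistant boilerplate, which is exactly why filtering at scale is so hard:

```python
# Toy screen for a web scrape: drop pages with obvious assistant tells.
# Most AI-generated text carries no such markers, so plenty slips through.
scraped_pages = [
    "As an AI language model, I cannot browse the internet.",
    "Here is my grandmother's actual stew recipe...",
]

AI_TELLS = ("as an ai language model", "regenerate response", "i cannot fulfill")

def looks_ai_generated(text: str) -> bool:
    lowered = text.lower()
    return any(tell in lowered for tell in AI_TELLS)

clean = [page for page in scraped_pages if not looks_ai_generated(page)]
print(clean)  # only the stew recipe survives
```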
Yes but many industries, writing included, already suffer from fraudulent activity.
One of the largest reasons entry-level software engineering jobs require five years of experience is that consulting firms train developers (producing the juniors) and then have them interview as senior developers.
The firm manages the job search, helps the consultants during interviews, and has teams helping with the actual work as well as with fitting in among the rest of the more experienced (or equally fraudulent) staff.
There are currently industries which make a profit off of fooling other companies and consumers, which are arguably more frightening.
If anything, this will increase demand for better human writers and ways to authenticate their work. If it doesn’t, we’ll get sick of the content AI creates before it gets too bad.
You are right in assuming there will be a symbiosis between AI-generated text and human-generated text, but jumping from there to assuming we will be using solely AI-generated text is wrong, in my opinion.
AI-generated content is not good enough on its own, despite what OpenAI’s marketing team wants you to think. No quality content is made by simply prompting ChatGPT. Not just in writing, but in any field of knowledge, actually. Using ChatGPT without some level of domain knowledge and fact-checking on the subject you are prompting is a sure way to get screwed, as a certain lawyer in the USA will tell you.
But going back to writing specifically, what we will see at first is actually an improvement on the overall quality of human generated writing, with AI offering a solution to the mechanical and usually boring side of writing good content, such as eloquence, syntax, clarity, etc.
Then, what we will also begin to notice is the more frequent use of what I like to call shitstorming.
Shitstorming consists of prompting an LLM for ideas, drafts, and opinions on subjects you want to write about and have some understanding of. What you receive in response will be biased, somewhat lacking content, which will either inspire you to modify and refactor it into something that makes sense, or make you so angry that you have to write something better in response to it. Writer’s block will become a thing of the past.
There are other aspects and nuances to this symbiosis, but to avoid going even longer on an already long post, I’ll conclude by saying that this evolution will be a loop that keeps improving LLMs while also improving human writing, simply because we will continuously look for ways to make the content better and more original.
The bad side is that, for those who don’t know how to use the tool, the mass of lacking content and standardized communication will indeed flood the internet, but this will only serve as contrast for original content, to the point where we will immediately tell the two apart, much like we do with advertising nowadays.
I’m not sure this is true. They could be trained on published works from before a certain date as the formal writing style, e.g. Project Gutenberg, then layer on the recent internet to better capture modern stylistic trends.
Ultimately, the models will always require fine-tuning, and selecting which dataset you use for early training has a very large impact on the overall performance of the model. Additional knowledge and trendiness can be learned after the fact.
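A minimal sketch of that idea, assuming each document in the corpus carries a publish date (the records and cutoff here are illustrative):

```python
from datetime import date

# Hypothetical corpus records; any archive with publish dates would do.
docs = [
    {"text": "Call me Ishmael...", "published": date(1851, 10, 18)},
    {"text": "Some 2023 blog post...", "published": date(2023, 4, 2)},
]

CUTOFF = date(2022, 11, 30)  # ChatGPT's public launch

# Pre-cutoff text is presumed human-written: use it for base training.
base_corpus = [d for d in docs if d["published"] < CUTOFF]

# Post-cutoff text captures current style but may contain AI output,
# so reserve it for a lighter, later fine-tuning pass.
recent_corpus = [d for d in docs if d["published"] >= CUTOFF]
```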
I don’t believe this theory 100%, but it is true to some extent. At some point, AI language will plateau and simply won’t get better. Once it’s at its max and has little left to learn, it will be so human-like that it won’t matter if it’s learning from itself. The percentage of influence would be so infinitesimal it practically wouldn’t matter. At that point it wouldn’t be necessary to learn anymore, anyway.
We aren’t doomed to write like AI; different themes or stories require different nuances. It’s artistic. But it depends on the medium. Sure, resumes, cover letters, memos, emails, and the like may become robotic (aren’t they already?), but creative stories won’t, to a great extent.
There are some in the research community that agree with your take: “The Curse of Recursion: Training on Generated Data Makes Models Forget”
Basically the long and short of that paper is that LLMs are inherently biased towards likely responses. The more their training set is LLM generated, and thus contains that bias, the less the LLM will be able to produce unlikely responses, over time degrading the model quality throughout successive generations.
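You can see the effect in a toy version of the setup (nothing like the paper’s scale, just the principle): fit a simple model to data, sample from it, fit the next model to those samples, and repeat. The fitted spread drifts downward and the tails thin out across generations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data, a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for gen in range(50):
    # "Train" the next model: fit a Gaussian by maximum likelihood.
    mu, sigma = data.mean(), data.std()
    # The next generation sees only the previous model's output.
    data = rng.normal(loc=mu, scale=sigma, size=50)

# The fitted spread tends to shrink generation over generation,
# so rare ("unlikely") values get rarer: a toy model collapse.
print(f"final spread: {data.std():.3f} (started at 1.0)")
```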
However, I tend to think this viewpoint is probably missing something important. Can you train a new LLM on today’s internet? Probably not, at least without some heavy cleaning. Can you train a multimodal model on video, audio, the chat logs of people talking to it, and even other better LLMs? Yes, and you will get a much higher quality model and likely won’t get the same model collapse implied by the paper.
This is more or less what OpenAI has done. All the conversations with 100M+ users are saved and used to further train the AI. Their latest GPT-4 is also trained on video and image recognition, and they have been exploring ways for LLMs to train new ones, especially to aid in alignment of these models.
Another recent example is Orca, a fine-tune of the open-source LLaMA model, which is trained with GPT-3.5 and GPT-4 as teachers and retains ~90% of GPT-3.5’s performance while using roughly ten times fewer parameters.
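For flavor, here’s the classic teacher/student (distillation) objective behind this kind of training. Orca itself learns from teacher-written explanations rather than raw logits, so treat this as a generic sketch, not anyone’s actual training code:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Push the student's token distribution toward the teacher's."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2

# Toy usage: 4 positions over a 10-token vocabulary.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow into the student only
```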
AI might in the medium-term change our vernacular but it won’t be for the worse, generally, and most people won’t feel much of a change in most contexts.
I liken it to the invention of the steam shovel. Now your average worker doesn’t need massive muscles to get work done quickly, but it’s not like shovels went away for good; they’re still used in parts of projects.
As for good human-generated data for training and building AIs? It’s like wood from trees. We’ve gone through the “just cut down the nearest tree, it won’t matter, they’re everywhere” period. Soon we’ll enter a data-farming period, much like managed forestry, and with the value of task-specific data and LLMs now being obvious, we’re probably already there.
Hmmm … maybe that’s why the big social networks are ratcheting up the prices for their APIs??!!
Honestly, it’s a little creepy how tangible a Matrix-like scenario is, without the apocalyptic-war part, that is. Machines feeding off of our data and thinking (which was, IIRC, the original premise, not energy).
I think there is going to be some sort of local minimum of quality when humans and AI both train the next AI. But then the quality will likely start rising again as we figure out better cost functions. Current cost functions mostly push the output to match the training data token for token; whatever tolerance models show for word order (as long as it’s grammatically correct) or synonyms is learned from the data rather than built into the loss. Maybe we’ll discover a better cost function later?
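For reference, here’s the objective in question: the standard next-token cross-entropy used to train today’s LLMs (shapes and sizes invented for the example). Any “better cost function” would be a replacement for this one loss line:

```python
import torch
import torch.nn.functional as F

# The dominant cost function today: next-token cross-entropy.
# It scores the model only on reproducing the training text's exact
# next token; there is no built-in credit for synonyms or reordering.
vocab_size = 50_000
logits = torch.randn(8, vocab_size)           # model outputs for 8 positions
targets = torch.randint(0, vocab_size, (8,))  # the actual next tokens
loss = F.cross_entropy(logits, targets)
print(loss.item())
```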
Yeah, basically a photocopy of a photocopy of a photocopy…
My take is that "L"LMs are already old news. I think targeted or limited data-set language models are going to be the next wave.
I think this partly because very few people can do LLMs at the scale of Microsoft and Google, so I think smaller firms and people in their garages are going to aim their sights at smaller, targeted data sets with an eye towards factual accuracy.
And then maybe link them together, daisy-chain them. I hope there’s a Unix philosophy for models, where each does one thing well but you can ‘pipe’ data from one to another.
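A minimal sketch of that pipe idea, with two hypothetical stand-in “models” that are just text-in/text-out functions (the real ones would be small specialized LMs):

```python
from functools import reduce
from typing import Callable

# Each "model" is a text-in/text-out stage, so stages compose
# exactly like shell commands in a pipeline.
Stage = Callable[[str], str]

def summarizer(text: str) -> str:
    # Stand-in for a small summarization model.
    return text[:100]

def fact_checker(text: str) -> str:
    # Stand-in for a narrow fact-checking model.
    return text + "\n[checked]"

def pipe(*stages: Stage) -> Stage:
    return lambda text: reduce(lambda acc, stage: stage(acc), stages, text)

pipeline = pipe(summarizer, fact_checker)
print(pipeline("A long article about training data..."))
```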