I’m actually surprised by the comments in here. This technology is incredibly disruptive to authors. If they are correct that their intellectual property has been misused by these companies to train LLMs, then they absolutely should have the right to prevent that.
You can be both pro-AI and pro-advancement and still respect creators’ intellectual rights, including the right not to have all their content stolen by megacorporations and used to generate profits while decimating entire industries.
Eventually the bad actors are going to lose a lot of money defending their theft of people’s art in court. It was always going to end up in the legal system. These apps are even programmed to scrub watermarks and signatures. It’s deliberate theft.
Yes, thank you for this comment.
One of the largest communities on Lemmy is !piracy@lemmy.dbzer0.com, so I’m not really surprised that there are people who don’t care about copyright :)
On the other hand, if a human is allowed to write a summary of a book, why should an AI not be allowed to do the same thing? Are they going to sue CliffsNotes too?
My main point is that if people don’t want their content used to train LLMs, they should absolutely have the option to opt out.
Training datasets should be ethically sourced through opt-in programs, which some companies, such as Adobe, are already running.
How can one prove that their content is being used to train the LLM, though, rather than something derivative of their content, like reviews of it?
There is already plenty of evidence that they have scraped copyrighted art and photographs for their datasets.
Exactly this. It’s the equivalent of me taking a movie, making something from it, charging for it, and then being displeased when the creators demand an explanation.
It’s more like reading a book and then charging people to ask you questions about it.
AI training isn’t only for mega-corporations. We can already train our own open-source models, so why should we let people put up barriers that will keep out all but the ultra-wealthy?
No, it’s more like checking out every book from the library, and spending 450 years training at the speed of light, being evaluated on how well you can exactly reproduce the next part of any snippet taken from any book.
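The objective described above, predicting the next part of a snippet, can be illustrated with a toy sketch. This is a hypothetical minimal example using simple bigram counts rather than a neural network; real LLMs learn next-token probabilities over enormous corpora, but the "guess what comes next" evaluation is the same idea.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the most frequently observed next word, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Toy "training corpus" standing in for the library of books.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

The model is judged purely on how well it reproduces continuations seen in its training data, which is the crux of the analogy in the comment above.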
No, it’s really nothing like reading at all. Your example requires a human element. This is just the consumption of data, not reading.
Humans are the ones making these models. It’s not entirely the same thing, but you should read this article by the EFF.
I don’t think that it is even remotely close to being the same thing. I’m sorry, but we shouldn’t be affording companies the ability to profit off other people’s creations without their consent, regardless of how current copyright law works.
Acting as though a human writing a summary is the same thing as a vast network of computers processing data hundreds if not thousands of times faster than a human is foolish. Perhaps it is also foolish to try to apply our current copyright laws (which already favour large corporations over individual creators) to this slew of new technology, but ignoring the fundamental difference between the two is no way of going about it. We need copyright reform, we need protections for creators, and we need to stop acting as though machine learning algorithms are remotely comparable to humans in their capabilities, responsibilities, and rights.
There is a perfectly reasonable way of doing this ethically, and that is using content that people have provided to the model of their own volition, with their consent either volunteered or paid for, not scraped from an epub, regardless of whether you bought it or downloaded it from libgen.
There are already companies training machine learning models ethically in this manner, and if creators do not want their content used as training data, it should not be.