Hi, I like to learn about what resources are out there on the internet. I hope you have found my posts useful!

  • 11 Posts
  • 12 Comments
Joined 1 year ago
cake
Cake day: July 1st, 2023

  • It would be difficult indeed, but without a doubt they will still try, and cause massive damage to our basic freedoms. For example, imagine if one day all chips required DRM at the hardware level that cannot be disabled. That is just one example of the damage they could do. There isn't much any consumer could do about it, since developing your own GPU is nearly impossible.


  • cll7793@lemmy.world (OP) to LocalLLaMA@sh.itjust.works — (Deleted for not relevant anymore) · edited · 1 year ago

    They are requesting something beyond watermarking. Yes, it is good to have a bot tell you when it made a piece of media. What is particularly concerning is that the witnesses want the government to keep track of every prompt and output ever made, so that any output can eventually be traced back to its origin. That means all open source models would somehow have to encode some form of signature, much like the hidden yellow dots printers produce on every sheet.

    There is a huge difference between a watermark stating "this is AI generated" and hidden encodings, much like a backdoor, that let them trace any publicly released AI image, video, and perhaps even text output back to a specific model — or worse, DRM-mandated "yellow dot" injection.

    I know researchers have already looked into encoding hidden, humanly undetectable patterns in text output, so extending this to everything else is not unjustified.
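    To make the idea concrete, here is a minimal toy sketch of how such a statistical text watermark can work (this is an illustration of the general "green list" technique, not any specific lab's implementation; the vocabulary, token names, and parameters are all made up). At each step, the previous token seeds a hash that splits the vocabulary into a "green" subset, and generation is biased toward green tokens; a detector that knows the scheme counts how often tokens land in their predecessor's green list:

    ```python
    import hashlib
    import random

    # Toy vocabulary and parameters -- purely illustrative, not a real model.
    VOCAB = [f"tok{i}" for i in range(100)]
    GREEN_FRACTION = 0.5  # fraction of the vocab favored at each step

    def green_list(prev_token: str) -> set:
        """Seed an RNG with the previous token and pick a 'green' subset of the vocab."""
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
        rng = random.Random(seed)
        return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

    def generate(length: int, seed_token: str = "tok0") -> list:
        """'Generate' text by always choosing from the green list (maximal bias)."""
        out = [seed_token]
        rng = random.Random(42)
        for _ in range(length):
            out.append(rng.choice(sorted(green_list(out[-1]))))
        return out

    def green_rate(tokens: list) -> float:
        """Detection: fraction of tokens inside their predecessor's green list."""
        hits = sum(1 for a, b in zip(tokens, tokens[1:]) if b in green_list(a))
        return hits / (len(tokens) - 1)

    watermarked = generate(200)
    rng = random.Random(7)
    unmarked = [rng.choice(VOCAB) for _ in range(201)]

    print(green_rate(watermarked))  # 1.0 -- every token is green
    print(green_rate(unmarked))     # ~0.5 -- chance level for unwatermarked text
    ```

    Nothing in the generated text visibly marks it, yet the statistical skew is unmistakable to anyone who knows the hash scheme — which is exactly why such a signature can act as a hidden tracing channel rather than a plain disclosure label.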

    Also, if the encodings are not detectable by humans, then they have failed the stated purpose of making AI-generated content recognizable.