While the motives may be noble (regulating surveillance), models like Stable Diffusion could get caught in the regulatory crossfire, to the point where using the original models becomes illegal and new models get castrated until they are useless. It might also become impossible for individuals or smaller startups to train open-source models (maybe even LoRAs). Adobe and the large corporations would rule the market.

  • ottensio@lemmy.dbzer0.com · 1 year ago

    Full ban on Artificial Intelligence (AI) for biometric surveillance, emotion recognition, predictive policing
    Generative AI systems like ChatGPT must disclose that content was AI-generated.
    Those are the concerning ones, because they already use it to some degree themselves, yet you are still the bad civilian for accessing military-grade AI tech to generate anime pictures. What a bunch of hypocrites.

    • jugalator@lemmy.world · 1 year ago

      Yeah, flagging AI-generated content as such, and strict requirements to disclose when AI is used against your privacy, would be something I'd agree with. But stifling AI itself is just moronic, because it will make the EU fall behind those who keep using it or who don't have such regulations. Let's hope it doesn't come to anything like that.