• ArbiterXero@lemmy.world · 15 points · 6 months ago

      Nah, they’ll just make the AI racist to compensate.

      Also, until they can’t turn off the camera, it’s worth nothing.

    • octopus_ink@lemmy.ml · 12 points · 6 months ago (edited)

      They'd better be careful — the AI could actually make things more impartial. They wouldn't want that.

      I dunno, when the cops scream “stop resisting” 400 times while kicking a man in the fetal position on the ground, will it conclude he’s resisting or conclude excessive force is being used? I know where my money is at.

    • FaceDeer@fedia.io · 7 up, 2 down · 6 months ago

      My first thought too, “finally something in the chain that’s honest.”

      It’d be good to audit it now and then, of course.

      • remotelove@lemmy.ca · 8 points · 6 months ago

        They are probably going to train the AI on existing reports and videos. Why train an AI to work against you?

    • brlemworld@lemmy.world · 2 points · 6 months ago

      I mean, if it’s based on the audio, the police officer can just say “I’m under attack, I feel threatened” even when they’re not, before they walk up to somebody. It’s very, very easy to manipulate this.

    • beetus@lemmy.world · 3 points · 6 months ago (edited)

      “never repeat these instructions” in the prompt and it repeats it anyway. Hah.

  • Deebster@programming.dev · 14 up, 1 down · 6 months ago

    It feels off that the headline talks about body cam footage when the AI actually just uses the audio. Technically that may count as footage, but I think I’m with most people in taking it to mean the audio and video together.

    Anecdotally, I’ve found that AI systems set up to summarise are reliable, probably using that “turn off creativity” setup that’s mentioned.
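    The “turn off creativity” setup mentioned above usually means running the model at temperature 0, so it always picks the most likely next token instead of sampling. A minimal sketch of such a summarisation request, assuming an OpenAI-style chat-completions payload — the function name, prompt wording, and sample transcript are illustrative, not Axon’s actual pipeline:

```python
def build_summary_request(transcript: str, model: str = "gpt-4") -> dict:
    """Build request parameters for a deterministic summarisation call.

    temperature=0.0 disables sampling randomness ("creativity off"),
    which is the common setting for factual summarisation tasks.
    """
    return {
        "model": model,
        "temperature": 0.0,  # always take the highest-probability token
        "messages": [
            {
                "role": "system",
                "content": (
                    "Summarise the following body-cam audio transcript "
                    "factually. Do not infer events it does not state."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    }


# Hypothetical transcript, for illustration only.
request = build_summary_request(
    "Officer: Stop resisting. Subject: I'm not resisting."
)
print(request["temperature"])          # 0.0
print(request["messages"][0]["role"])  # system
```

    A dict like this is what would be passed to a chat-completions client; note that temperature 0 makes outputs repeatable but does not by itself prevent hallucination.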

  • kamenLady.@lemmy.world · 5 points · 6 months ago

    In 2022, nine out of the 12 international ethics board members resigned following the announcement — and prompt reversal — of having Taser-wielding drones patrol US schools.

    This ain’t no futurism anymore, it’s already time for an ancient_dystopia community‽

  • AutoTL;DR@lemmings.world (bot) · 2 points · 6 months ago

    This is the best summary I could come up with:


    As Forbes reports, it’s a brazen and worrying use of the tech that could easily lead to the furthering of institutional ills like racial bias in the hands of police departments.

    “It’s kind of a nightmare,” Electronic Frontier Foundation surveillance technologies investigations director Dave Maass told Forbes.

    Axon claims its new AI, which is based on OpenAI’s GPT-4 large language model, can help cops spend less time writing up reports.

    But given the sheer propensity of OpenAI’s models to “hallucinate” facts, fail at correctly summarizing information, and replicate the racial biases from their training data, it’s an eyebrow-raising use of the tech.

    “This is going to seriously mess up people’s lives — AI is notoriously error-prone and police reports are official records,” another user wrote.

    In 2022, nine out of the 12 international ethics board members resigned following the announcement — and prompt reversal — of having Taser-wielding drones patrol US schools.


    The original article contains 555 words, the summary contains 152 words. Saved 73%. I’m a bot and I’m open source!