Great! Automated discrimination against the poor
do ‘the poor’ all drive really bad?
We’ve already got numerous examples of how these AI models and face-recognition models tend to have biases or are fed data that accidentally encodes a racial bias. It’s not a stretch of the imagination to see how this can go wrong.
No, but any automated system can be used to punish people who cannot afford to fight it.
Well to be fair, this is because of the stupid justice system in the US.
Just the term “afford to fight it” is something that should never exist in a civilized society.
I agree, but can we not with the r word?
Yes I can edit it, didn’t mean to offend.
Not that I’m aware of.
But we know the criminal justice system currently has biases. If the data the “AI” is trained on was affected by these biases, or others that we don’t realize, then it will produce biased results.
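To make that concrete, here’s a minimal sketch in Python, using entirely synthetic data and hypothetical variable names (“group”, “risk”, and “proxy” are stand-ins, not anything from a real system), of how bias in historical labels carries straight through to a trained model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)  # stand-in for a demographic attribute
risk = rng.normal(size=n)      # the legitimate signal

# Historical labels: driven by real risk, but group 1 was flagged more
# often at the same risk level -- the bias baked into the training data.
labels = risk + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5

# Even with "group" dropped from the features, a correlated proxy
# (think zip code or prior-stop counts) leaks it back in.
proxy = group + rng.normal(scale=0.3, size=n)
X = np.column_stack([risk, proxy])

model = LogisticRegression().fit(X, labels)
preds = model.predict(X)

# The model flags group 1 at a much higher rate, mirroring its labels.
for g in (0, 1):
    print(f"group {g}: flagged rate = {preds[group == g].mean():.2f}")
```

Nothing in the model is biased on purpose; it just faithfully learns whatever pattern the labels contain.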
One of the worst parts of all this, to me, is that the AI and the dataset used to train it are kept secret as proprietary information, and police and governments buy it anyway, even though nobody can even try to check the code or dataset for the biases or errors it might have (and definitely does).
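And the audit that secrecy blocks isn’t exotic. Here’s a sketch of the basic check anyone could run if the vendor had to disclose predictions and outcomes (the arrays below are hypothetical stand-ins, not real data):

```python
import numpy as np

def false_positive_rate(labels: np.ndarray, preds: np.ndarray) -> float:
    """Share of true negatives the system wrongly flagged."""
    negatives = ~labels
    return float(preds[negatives].mean()) if negatives.any() else 0.0

def audit_by_group(labels, preds, group):
    for g in np.unique(group):
        mask = group == g
        fpr = false_positive_rate(labels[mask], preds[mask])
        print(f"group {g}: false-positive rate = {fpr:.2f}")

# Toy illustration of what disclosure would enable:
labels = np.array([False, False, True, False, True, False])
preds  = np.array([True,  False, True, True,  True, True])
group  = np.array([0, 0, 0, 1, 1, 1])
audit_by_group(labels, preds, group)
# group 0: false-positive rate = 0.50
# group 1: false-positive rate = 1.00
```

With a proprietary black box there is nothing to feed these functions: no dataset, no predictions, no way to see which groups it fails.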