No it didn’t; if there was oversight, they saw the target and didn’t care.
AI is not the problem, it’s the scapegoat. They want to be able to shrug and point at AI, saying there was a misclassification and that it wasn’t their fault. Meanwhile they ignore the fact that a human at any point could have stopped the attack or double-checked the target. They chose not to because they don’t care. Collateral damage, wanton destruction, and civilian casualties are the goal.
This is the WHOLE point of why these generative models have been pushed so hard the past couple of years. They tested the waters to see if people would accept “it’s the computer’s fault” as an acceptable excuse, and then slammed on the gas.
Accountability sinks, as Dan Davies has named them, are the whole point. It’s everything a slimy corporate CEO or government official has ever wanted.