Computers make mistakes and AI will make things worse — the law must recognize that
(www.nature.com)
The "AI" I think is being referenced is one that instructs officers to patrol certain areas more heavily based on crime statistics. Because racist officers often patrol black neighbourhoods more heavily, the crime statistics there are higher (more crimes are caught and reported because more eyes are present). This creates a feedback loop: the AI looks at the crime stats for each area, picks out the black-populated ones, and further increases patrols there.
In this case, no details about the people are needed, only the location, time, and severity of each crime. The AI still produces racist outcomes even though race never appears in the dataset.
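The feedback loop above can be sketched in a toy simulation (a minimal illustration with made-up numbers, not the actual system from the article): two neighbourhoods have the *same* true crime rate, but one starts with higher recorded stats because it was historically patrolled more. A greedy "predictive" allocator that sends patrols wherever recorded crime is highest then locks onto that neighbourhood permanently, since crimes are only recorded where officers are present to see them.

```python
import random

random.seed(1)

# Assumed toy model: identical true crime rates everywhere; race is
# nowhere in the data, yet the loop still entrenches the initial bias.
TRUE_RATE = 0.3                    # same underlying rate in both areas
recorded = {"A": 10, "B": 20}      # B starts higher only because it was
                                   # historically patrolled more heavily
patrol_log = {"A": 0, "B": 0}

for day in range(1000):
    # "Predictive" step: send the patrol to the highest-crime area.
    target = max(recorded, key=recorded.get)
    patrol_log[target] += 1
    # Recording step: crimes are only caught where officers are watching.
    if random.random() < TRUE_RATE:
        recorded[target] += 1

print(patrol_log)  # → {'A': 0, 'B': 1000}: every patrol locks onto B
```

Because B starts ahead and only the patrolled area can accumulate new records, A is never patrolled again: the statistics "confirm" the allocation that produced them.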