AI is just mimicking its training data. If the training data teaches it that something is wrong, that is something it has "learned" from humans. If its training data is racist, it will be racist.
There have been real cases of software recommending harsher penalties or heavier surveillance for minorities, because the training data came from people who handed down harsher penalties and heavier surveillance to minorities in the first place.
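A toy sketch of that dynamic, with made-up data and assuming scikit-learn is available (the feature names and numbers here are hypothetical, just to illustrate the point):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two features: a group flag and a genuinely relevant risk score.
group = rng.integers(0, 2, size=n)   # 1 = hypothetical minority group
risk = rng.normal(0.0, 1.0, size=n)  # the legitimate signal

# Biased historical labels: past decision-makers flagged group 1
# more often regardless of actual risk.
biased_extra = (group == 1) & (rng.random(n) < 0.4)
label = ((risk > 0.5) | biased_extra).astype(int)

X = np.column_stack([group, risk])
model = LogisticRegression().fit(X, label)

# The model ends up penalizing the group flag itself: for two people
# with identical risk, the group-1 person gets a higher predicted score.
print(model.coef_)
print(model.predict_proba(np.array([[0, 0.0], [1, 0.0]]))[:, 1])
```

The model isn't "deciding" anything is wrong; it's just reproducing the pattern baked into the labels.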
I bring this up because the statement “Even AI knows when something is wrong” implies that these racist models are okay because the AI doesn’t think it’s wrong.