Have you heard of ImageNet Roulette? Chances are, some of the people you follow on social media have tried it out. In some cases, ImageNet Roulette produces controversial, even cruel, results. This is a feature, not a bug!
ImageNet Roulette was launched earlier this year as part of a broader project to draw attention to the things that can – and regularly do – go wrong when artificial intelligence models are trained on problematic training data.
The ImageNet Roulette website provides further explanation: ImageNet Roulette is trained on the “person” categories from a dataset called ImageNet, which was developed at Princeton and Stanford Universities in 2009 and is one of the most widely used training sets in machine learning research and development.
An AI trained on the ImageNet dataset bases its response to each selfie on the information in that dataset. This posed a dilemma for the researchers who released ImageNet Roulette. In their own words:
“One of the things we struggled with was that if we wanted to show how problematic these ImageNet classes are, it meant showing all the offensive and stereotypical terms they contain. We object deeply to these classifications, yet we think it is important that they are seen, rather than ignored or tacitly accepted. Our hope was that we could spark in others the same sense of shock and dismay that we felt as we studied ImageNet and other benchmark datasets over the last two years.”
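To make that mechanism concrete, here is a minimal sketch in Python. It is not ImageNet Roulette’s actual code: it uses a standard model pre-trained on ImageNet’s 1,000 object categories rather than the “person” categories the project drew on, and the file name selfie.jpg is a placeholder. The point is simply that the model’s answer must come from its training labels; it has nowhere else to look.

```python
# Minimal sketch (not ImageNet Roulette's actual code): a classifier
# trained on a fixed label set can only ever answer with labels from
# that set. If the labels are offensive, so are the answers.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT   # pre-trained on ImageNet
model = models.resnet18(weights=weights)
model.eval()

image = Image.open("selfie.jpg")            # hypothetical input file
batch = weights.transforms()(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)

# The model's "answer" is forced to be one of its training labels:
# whatever is (or is not) in the dataset bounds what it can say.
print(weights.meta["categories"][logits.argmax().item()])
```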
A warning appears near the top of the ImageNet Roulette website: “ImageNet Roulette regularly returns racist, misogynistic and cruel results.” The site points out that this is because of the underlying dataset it draws on, namely ImageNet’s “Person” categories. ImageNet is one of the most influential training sets in AI, and ImageNet Roulette is a tool designed to show some of the underlying problems with how AI classifies people.
If you put a selfie into ImageNet Roulette and received racist, misogynistic, or cruel results, you may have felt hurt or offended. That is because the AI based its response on a dataset that includes very negative classifications of people. It seems to me that the point of ImageNet Roulette was to emphasize that AI cannot be unbiased if the data it works with is biased. What better way to make that clear than by letting people post their results to social media?
The ImageNet Roulette project has officially achieved its aims and will no longer be available online after September 27, 2019. It will, however, remain in circulation as a physical art installation, currently on view at the Fondazione Prada Osservatorio in Milan until February 2020.