February 2018, Vol. 6 No. 1

Big data throws bias in machine learning data sets

Say you're training an image recognition system to identify U.S. presidents. The historical data reveals a pattern of males, so the algorithm concludes that only men are presidents. It won't recognize a woman in that role, even though a female president is a likely outcome of a future election. This latent bias is one of the many types of bias that challenge data scientists today. If the machine learning data set used in an AI project isn't neutral -- and it's safe to say almost no data is -- the outcomes can actually amplify that bias.

Visual recognition technologies that label images require vast amounts of labeled data, which largely comes from the web. You can imagine the dangers in that -- and researchers at the University of Washington and the University of Virginia confirmed one telling example of gender bias in a recent report. They found that when a visual semantic role labeling system sees a spatula, it labels the utensil as a cooking tool, but it's also likely to refer to the person in ...
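To make the latent-bias mechanism concrete, here is a minimal sketch in Python using scikit-learn on hypothetical toy data (the group/label setup is our illustration, not from the report). A classifier trained on a historical record in which only one group ever holds the role learns the group, not the role, and never predicts a positive outcome for anyone else.

```python
# Minimal sketch on hypothetical toy data: a classifier trained on a skewed
# historical record learns the skew itself and never predicts the excluded case.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two features: a group attribute (a stand-in for gender in the example above)
# and an unrelated "qualification" score.
group = rng.integers(0, 2, size=n)
qualification = rng.normal(size=n)
X = np.column_stack([group, qualification])

# Biased historical labels: only members of group 0 were ever "president".
y = (group == 0).astype(int)

model = LogisticRegression().fit(X, y)

# A highly qualified candidate from group 1 is still predicted negative --
# the model has reproduced the historical bias, not evaluated the candidate.
candidate = np.array([[1, 3.0]])
print(model.predict(candidate))  # [0]
```

Note that the fix here lies in the data, not the model: no amount of tuning changes the prediction while the training set encodes the historical skew.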

