When Your AI Suffers From Dunning–Kruger Effect

A similar problem arises when security systems try to confirm your identity by quizzing you from "authoritative" sources you have never seen.

Originally shared by Rick Wayne (Author)

Death by Proxy

What Google was doing is something that’s now commonplace for tech products: It was using proxies. A proxy is a stand-in for real knowledge—similar to the personas that designers use as a stand-in for their real audience. But in this case, we’re talking about proxy data: When you don’t have a piece of information about a user that you want, you use data you do have to infer that information. Here, Google wanted to track my age and gender, because advertisers place a high value on this information. But since Google didn’t have demographic data at the time, it tried to infer those facts from something it had lots of: my behavioral data.
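To make "proxy data" concrete, here is a hypothetical Python sketch of the general idea: combine per-category demographic priors over the sites a user visits into a single guess. The categories, weights, and the naive voting rule are all invented for illustration; they are not Google's actual model.

# A hypothetical sketch of proxy inference: guess a demographic attribute
# from behavioral data alone. Categories and weights are invented for
# illustration and are not Google's model.
from collections import Counter

# Assumed per-category gender priors, learned from some labeled population.
CATEGORY_PRIORS = {
    "tech":      {"male": 0.70, "female": 0.30},
    "parenting": {"male": 0.25, "female": 0.75},
    "sports":    {"male": 0.65, "female": 0.35},
}

def infer_gender(visited_categories):
    """Combine per-category priors into a single guess (naive voting)."""
    score = Counter()
    for category in visited_categories:
        for gender, weight in CATEGORY_PRIORS.get(category, {}).items():
            score[gender] += weight
    return score.most_common(1)[0][0] if score else "unknown"

# A user who mostly reads tech news gets coded as male, whatever the truth.
print(infer_gender(["tech", "tech", "sports"]))   # -> "male"

Note that the output is a hard label even though the underlying priors are weak: a few tech pages are enough to code a user "male" regardless of the truth, which is exactly the failure the next paragraph describes.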

The problem with this kind of proxy, though, is that it relies on assumptions—and those assumptions get embedded more deeply over time. So if your model assumes, from what it has seen and heard in the past, that most people interested in technology are men, it will learn to code users who visit tech websites as more likely to be male. Once that assumption is baked in, it skews the results: The more often women are incorrectly labeled as men, the more it looks like men dominate tech websites—and the more strongly the system starts to correlate tech website usage with men.
http://nautil.us/issue/52/the-hive/google-thought-i-was-a-man?utm_source=RSS_Feed&utm_medium=RSS&utm_campaign=RSS_Syndication
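To see how that reinforcement plays out, here is a minimal, entirely hypothetical Python simulation (none of the numbers come from the article): a model starts with a slightly biased estimate of how male-skewed tech-site visitors are, labels new visitors with over-confident guesses, and then retrains on its own labels.

# A hypothetical simulation of the feedback loop described above: a model
# estimates P(male | visits tech site), labels new visitors with
# over-confident (sharpened) guesses, then retrains on its own labels.
# All numbers are illustrative assumptions, not data from the article.
import random

random.seed(42)

TRUE_MALE_FRACTION = 0.5   # assumed ground truth: tech visitors are 50/50
KNOWN_FRACTION = 0.2       # assumed share of users with self-reported gender
estimate = 0.6             # assumed initial bias: "tech visitors skew male"

for generation in range(8):
    labels = []
    for _ in range(100_000):
        truly_male = random.random() < TRUE_MALE_FRACTION
        if random.random() < KNOWN_FRACTION:
            # Self-reported users are labeled correctly.
            labels.append(truly_male)
        else:
            # Everyone else gets a proxy label. The model sharpens its
            # estimate (squares the odds), i.e. it labels "male" more
            # often than its own statistics justify.
            sharpened = estimate**2 / (estimate**2 + (1 - estimate)**2)
            labels.append(random.random() < sharpened)

    # Retrain: the biased labels become next generation's "ground truth".
    estimate = sum(labels) / len(labels)
    print(f"generation {generation}: estimated male fraction = {estimate:.3f}")

Under these assumed numbers the estimate climbs from 0.6 toward roughly 0.89 even though the simulated population stays at 50/50; the model's own mislabels are the only "evidence" for the shift.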
