It’s easy to start fretting about the coming robo-takeover and all its attendant Terminator and I, Robot-esque predictions about how artificial intelligence is coming for humanity. But for the moment, the real concerns about new technology aren’t so different from the old concerns: systemic bias against people of color.
A new study from the Georgia Institute of Technology suggests that automated vehicles might be better at recognizing light-skinned people than dark-skinned people. According to the researchers, self-driving cars are five percent less likely to detect someone with dark skin than someone with light skin.
“The main takeaway from our work is that vision systems that share common structures to the ones we tested should be looked at more closely,” Jamie Morgenstern, one of the researchers, told Vox.
One caveat here: the study had to use several publicly available object-detection models rather than the exact ones used by companies actually building self-driving cars. That’s because those companies don’t make their data available to the public, which is something of a concern in and of itself.
But assuming that these companies’ own data is at least somewhat similar to the most commonly used datasets employed by everyone else, it’s safe to assume that the utopian future of driverless cars has a few serious issues for non-white people. Researchers hasten to note that this doesn’t mean that programmers are intentionally trying to keep white people safer — just that their own implicit biases work their way into the algorithms they create.
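The study’s core measurement is simple in spirit: label pedestrians by skin tone, then compare how often the detector finds each group. A minimal sketch of that kind of comparison is below; the numbers and variable names are made up for illustration and are not the study’s actual data or method.

```python
# Hypothetical sketch of a skin-tone detection audit (toy data, not the study's).
# Each entry marks whether the detector found a labeled pedestrian:
# 1 = detected, 0 = missed.

def detection_rate(results):
    """Fraction of labeled pedestrians the model detected."""
    return sum(results) / len(results)

light_skin = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]  # 9 of 10 detected
dark_skin = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1]   # 8 of 10 detected

gap = detection_rate(light_skin) - detection_rate(dark_skin)
print(f"Detection gap: {gap:.0%}")  # prints "Detection gap: 10%"
```

With real data, the groups would be annotated using a standard skin-tone scale and the comparison run across whole benchmark datasets, but the disparity being reported is this same kind of gap between group-level detection rates.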
The fix for this, according to Kartik Hosanagar, author of A Human’s Guide to Machine Intelligence, is for companies to ensure racial and gender diversity on their teams.
Tyler Huckabee is RELEVANT's executive editor. He lives in Nashville with his wife, dog and Twitter account.