One of the important topics in computing these days is diversity. Why? Part of it is concern about equal opportunity (see "Why Can't Silicon Valley Solve Its Diversity Problem?"). But it is often hard to make people understand why a lack of diversity is a problem in itself. People are only starting to understand how bias creeps into software. It creeps in because we all have biases, some more subtle than others, and those biases influence the software we write.
Here are a few examples of issues that have come up around race:
- Facial Recognition Is Accurate, if You’re a White Guy
- A beauty contest was judged by AI and the robots didn't like dark skin
- Why Can't This Soap Dispenser Identify Dark Skin?
It should be obvious that something like facial recognition should be tested with a wide variety of people with different faces and skin tones. Right? Well apparently it is not so obvious.
Some early color computer monitors used a mix of red and blue colored letters. This was a real problem, because a surprising number of people have color blindness that makes red and blue hard to distinguish. No one on the design team had that problem, of course, so it slipped by until the product was released.
There is a story, one I have not been able to verify but which makes a good example, that early models of the Apple Newton had a very good handwriting reader. It worked well, that is, until someone handed one to a left-handed person and it could not read their handwriting.
I’ve been talking about testing and debugging with my students lately. It seems like a logical place to talk about algorithmic bias and the need for testing with a diverse population. We also talk about how different viewpoints contribute to more, and different, ways of looking at problems.
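One concrete way to show students what "testing with a diverse population" means is to break a model's test results down by group instead of looking only at the overall number. The sketch below is purely illustrative: the function name, the toy labels, and the group tags "A" and "B" are all my own invented example, not from any real system, but it shows how a respectable overall accuracy can hide a serious disparity for one group.

```python
# Minimal sketch: evaluating accuracy per demographic group, not just overall.
# All data here are made-up toy values for classroom illustration.

from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return overall accuracy and a per-group accuracy breakdown."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_group

# Toy example: predictions that happen to be much worse for group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 1, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

overall, per_group = accuracy_by_group(y_true, y_pred, groups)
print(overall)    # 0.625 overall looks mediocre but not alarming...
print(per_group)  # {'A': 1.0, 'B': 0.25} ...until you see who bears the errors
```

If students only ever compute the single overall score, the soap-dispenser and facial-recognition failures above are exactly the kind of thing that slips through.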
I feel like this is an important topic to cover. Having a more diverse population in computing is clearly an issue of fairness, and that alone is reason enough to promote it. But I don’t think it hurts to point out that diversity also results in better software, which benefits all of us. If we don’t explain this and teach it, then I don’t think all of our students will understand it on their own. Some will, of course, but it is too important a topic to leave to chance.
BTW there is a regular Twitter chat on ethics and computing using the #EthicalCS hashtag. Highly recommended.
For more on the topic of bias in algorithms you may want to visit the Algorithmic Justice League.
Reading this article made me think about having an AI draw political boundaries. Sounds like a great idea, until you let partisan values determine how the AI learns what the best way to do things is.
Keeping Robots Friendly: Meet The Woman Teaching AI About Human Values via @forbes https://www.forbes.com/sites/andreamorris/2018/02/07/keeping-robots-friendly-meet-the-woman-teaching-ai-about-human-values/#149e3c1160f9