Facial Recognition: Racial Disparity And Accountability

I can’t believe I forgot to post this! A couple of months ago when I was still running for Stanford’s track team, I started thinking pretty seriously about trying to develop a facial recognition algorithm for the purposes of automatically tagging photos.

Every fall we have our “Media Day” photos. We take regular old headshots, but we also have the opportunity to take some pretty fun ones. This is obviously a super fun part of the whole collegiate athletics thing because it doesn’t involve running until your field of vision grows dark or balancing classes as precariously as humanly possible during competition season. It’s just a moment to have fun and feel a little special.

With all the excitement, you can imagine that my teammates and I get pretty eager to get our photos back so we can post them and whatnot. For those of us who take advantage of NIL opportunities, these photos can very well be a chance to attract attention to our **dazzling** personalities and garner paying sponsorships. Or for me, send them to my parents so that my grandma can post me on her Facebook page. So yeah… the stakes can be pretty high!

I don’t think my grandma was particularly eager to post this one…

The tricky part: There are hundreds or thousands of these photos just from our team, and probably the same number or more from each of the other 36 varsity sports teams at Stanford. So it takes a long time to capture all these images, tag the photos (some photos have multiple subjects at once!), sort by team, sort by the people in the photos, and distribute everything. It dawned on me that a computational solution powered by AI facial recognition could be super useful. So I developed one.
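I won’t go into the full implementation here, but the core matching step can be sketched roughly like this. This is a minimal, hypothetical sketch: the embeddings, roster names, and threshold are placeholders, and a real system would compute the embeddings with an actual face-recognition model rather than hand-written vectors.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def tag_faces(face_embeddings, roster, threshold=0.8):
    """Match each detected face embedding to the closest roster entry.

    face_embeddings: list of embedding vectors, one per detected face.
    roster: dict mapping a teammate's name to their reference embedding.
    Returns a list of names (or None when no match clears the threshold).
    """
    tags = []
    for emb in face_embeddings:
        best_name, best_sim = None, threshold
        for name, ref in roster.items():
            sim = cosine_similarity(emb, ref)
            if sim > best_sim:
                best_name, best_sim = name, sim
        tags.append(best_name)  # None means "unrecognized face"
    return tags
```

With a real embedding model in front of it, this nearest-neighbor-with-threshold step is what turns “a pile of photos” into “photos sorted by person,” including photos with multiple subjects.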

I was initially thrilled with the accuracy. It was even able to distinguish between the identical twins we had on the team (turns out this wasn’t as impressive a feat as it seemed, but that’s a story for another day). But as I tried to fine-tune a few hyperparameters, I discovered a serious issue. The algorithm seemed to perform worse for subjects with darker skin tones. Many of my Black teammates were being miscategorized.
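The way I noticed this was by breaking accuracy down per group instead of looking at one overall number. A simple sketch of that check (the labels and group names below are hypothetical):

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Fraction of correct tags within each group, not just overall.

    y_true: correct names, y_pred: algorithm's tags,
    groups: group label for each photo (e.g. a skin-tone category).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        if t == p:
            correct[g] += 1
    return {g: correct[g] / total[g] for g in total}
```

A large gap between the per-group numbers is exactly the kind of disparity that an aggregate accuracy score can hide.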

When spring came around, I took CS 281: Ethics of Artificial Intelligence. For our final project, we were supposed to explore some theme we touched on in class. I thought this would be an excellent opportunity to more formally investigate the issues I found in the original project.

To find out more about how this all went and read my super cool paper, click the link below.
