AI Uses Photos To Determine Sexual Orientation

"A very insightful article indeed!" – Michal Kosinski

Imagine going to a party where a friend takes a picture on her mobile phone of someone entering the room. As she turns the phone for you to see the screen, you notice a tag beneath the newly taken photo indicating the man's sexual orientation. Now imagine a family member who uploads a photo of a spouse, child, aunt, or uncle into an app to determine whether that person is gay or straight. While you might believe an app can't do that, a new article reveals that belief is wrong!

This morning I read an extremely interesting article by two researchers from Stanford University. They built an AI program that looks at photos of people to determine their sexual orientation. Interestingly, when people are asked to determine someone's sexual orientation from a photo alone, they get it right about 50 to 60 percent of the time, which is only marginally better than flipping a coin. Compare that human performance against the results of the AI solution the researchers developed: using nothing more than a photo, their program correctly determines someone's sexual orientation up to 80% of the time, a number that rises to 90% when more photos are used!

If, using nothing more than a photo, humans are only slightly better than a coin flip at determining someone's sexual orientation, what makes us believe that these researchers got it right? I reviewed their paper and found that the authors followed the key hallmarks of sound machine learning practice:

  1. They used a logistic regression prediction model on top of a tool called VGG-Face, a network previously trained on 2.6 million images, which helps prevent overfitting;
  2. They used a good-sized data set of photos to classify sexual orientation;
  3. They used different data for training than for testing;
  4. Finally, they performed another important step called cross-validation.

In short, they did the things you're supposed to do when building reliable machine learning-based solutions.
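To make those four steps concrete, here is a minimal sketch of that kind of pipeline in Python with scikit-learn. It is not the authors' actual code: the embeddings below are synthetic stand-ins for the pretrained VGG-Face descriptors, and every name and size is illustrative.

```python
# Sketch: pretrained-network features + logistic regression, with a
# train/test split and cross-validation. Synthetic data stands in for
# the real face embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

rng = np.random.default_rng(0)

# Placeholder for the feature vectors a pretrained face network would
# produce, one per photo (sizes are illustrative).
X = rng.normal(size=(1000, 4096))
y = rng.integers(0, 2, size=1000)  # synthetic binary labels

# Hallmark 3: hold out test data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Hallmark 1: only the small logistic regression layer is fit; the
# pretrained network is frozen, which curbs overfitting.
clf = LogisticRegression(max_iter=1000)

# Hallmark 4: cross-validation on the training set estimates how the
# model generalizes before it ever touches the test set.
cv_scores = cross_val_score(clf, X_train, y_train, cv=5)
print(f"cross-validation accuracy: {cv_scores.mean():.3f}")

clf.fit(X_train, y_train)
print(f"held-out test accuracy: {clf.score(X_test, y_test):.3f}")
```

With random data the accuracies hover around 50%, which is itself instructive: a sound pipeline reports chance-level performance when there is no real signal to find.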

For many, the idea that you can determine someone's sexual orientation simply by looking at them is not only worrisome but brings up distasteful images of the Nazis, who used calipers to measure people's skulls as part of their racial classifications. Not only were the measurements distasteful, they were used as part of a eugenics program to decide who lived and who died, resulting in one of the most horrific genocides in modern history. It's inaccurate to believe that such classifications stopped with the end of World War II: as Trevor Noah notes in his book Born a Crime, such racial and ethnic classifications continued in South Africa until apartheid ended in the early 1990s. So the researchers' findings should, and will, evoke an emotional reaction.

However, unlike people, AI doesn't make a judgment or have an emotional reaction. From the computer's perspective, the only thing that changes is the set of categories used in the training data. So if the computer is asked to classify an image into two categories, straight and gay, and it has the data and a learning approach, it can find a pattern, even if a human can't. While a computer may not make an inherent judgment about the classification, people can and will.
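To see why the categories are the only moving part, consider this sketch. It reuses the same kind of model as the pipeline above; `embeddings`, `smiling_labels`, and `orientation_labels` are hypothetical placeholders, not data from the paper.

```python
# The training code is label-agnostic: the same pipeline fits whatever
# category column you hand it. All names here are hypothetical.
from sklearn.linear_model import LogisticRegression

def train_classifier(features, labels):
    """Fit the same model regardless of what the labels mean."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, labels)
    return clf

# Identical call, different question; only the labels change:
# smile_model = train_classifier(embeddings, smiling_labels)
# orientation_model = train_classifier(embeddings, orientation_labels)
```

The model has no notion of what the labels mean; any meaning, and any judgment, is supplied by the people who choose the categories.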

In earlier articles, I've said that technology can be used for good or bad purposes. AI does what it's been told or taught; machine learning does what it has learned. The subtlety is that with machine learning, the computer might pick up a nuance (or nuances) that a human doesn't know is there. That is part of AI's strength. This is why an AI-based solution can use a photo to detect medical problems, or classify whether a mole on your arm is cancerous or benign. Clearly, few would stand in the way of advancing medical applications that improve people's lives.

But when used improperly, the risks can be life-threatening instead of life-saving, especially when that same photo is used to determine someone's sexual orientation. As the authors note, the identification of one's sexual orientation "could have serious and even life-threatening implications" for someone in a family, cultural group, or country that is intolerant of homosexuality. It's not a stretch of the imagination to envision such an app when you consider the capabilities that already exist today in Blippar or Microsoft's Seeing AI, which classify what your smartphone is looking at through its camera lens.

There are important business implications to consider. Because a human can't easily detect the same features a computer might use in making its classification, companies will have to guard against releasing AI-based solutions that contain inherent biases. This risk is real when a solution consists of hundreds or thousands of features whose relationships are not obvious, making any biases hard to detect in advance through testing.
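One concrete safeguard is a pre-release audit that compares a model's error rates across subgroups, since a bias buried in thousands of features rarely shows up in aggregate accuracy. A minimal sketch, assuming a trained classifier `clf`, held-out data `X_test` and `y_test`, and a per-example `group` array; all of these names are hypothetical.

```python
# Minimal per-group audit: disparate accuracy across subgroups is a
# red flag worth investigating before release.
import numpy as np

def audit_by_group(clf, X_test, y_test, group):
    """Report accuracy for each subgroup in the held-out data."""
    preds = clf.predict(X_test)
    for g in np.unique(group):
        mask = group == g
        acc = (preds[mask] == y_test[mask]).mean()
        print(f"group={g!r}: n={mask.sum()}, accuracy={acc:.3f}")
```

An audit like this doesn't explain why a model is biased, but it flags disparities early enough to withhold a release.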

Employers will also have to consider the mobile apps that their employees download onto their personal devices. It is not hard to imagine that an innocent photo from a company event or party could inadvertently reveal another employee's sexual orientation. Company policies will have to be updated to account for this rapidly advancing technology.

The authors admit that they struggled with the decision of whether to share their findings publicly. I am glad that they published their results, because if they hadn't, someone could have developed an app, released it into the world, and put everyone in a reactionary position. Rather than making decisions from that reactionary position, we now have an opportunity to be thoughtful and proactive.

Update 1 [Sept. 13, 2017] Since the publication of their paper, the authors have come under scrutiny from various LGBTQ+ and human rights groups. Clearly unhappy with the results and their implications, these groups have called the paper "dangerous," "flawed," and "junk science." While I understand the emotional reaction, their criticisms are not scientific and disregard issues the authors address in the original paper. It's important to note that the authors say they would not object if their findings were shown to be wrong.

Contrary to assertions that their approach was not sound, their findings are mathematically grounded. In fact, I found only two places where I believe they reach unsupported conclusions. First, the authors assert that their findings support the prenatal hormone theory (PHT) of sexual orientation. This was not the focus of their research, and it is a huge leap to assert that characteristics in a photo alone align with one's genetic makeup. Current research only shows that a relationship in the opposite direction holds: you can look at one's genome and predict aspects of what that person looks like. While it may be theoretically possible to move from photo to genotype, their paper does not provide sufficient evidence to show this.

Second, we have to guard against mistaking correlation for causation. Although the authors are able to link phenotypical (e.g., visual) characteristics in photos to sexual orientation, the visual cues may be a result rather than an indication of a cause. For example, after someone identifies with a certain group, it is quite possible that they learn to take on the behaviors, mannerisms, and other characteristics of that group. This means we have to guard against drawing conclusions not present in their work.
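A toy simulation makes the point. Suppose a visible style is adopted with high probability after someone identifies with a group: the style then predicts membership quite well even though, by construction, it is a consequence rather than a cause. Every number and name here is invented for illustration.

```python
# Toy "consequence, not cause" simulation: a cue acquired after joining
# a group still correlates strongly with membership.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

membership = rng.integers(0, 2, size=n)  # the underlying label
# Members adopt a visible style 90% of the time; non-members 20%.
p_style = np.where(membership == 1, 0.9, 0.2)
style = (rng.random(n) < p_style).astype(int)

# The style predicts membership far better than chance...
accuracy = (style == membership).mean()
print(f"predicting membership from style alone: {accuracy:.1%}")
# ...yet the causal arrow runs from membership to style. A classifier
# sees only the correlation; the direction is invisible to it.
```

Run it and the style alone predicts membership around 85% of the time, which is exactly the trap: strong predictive power says nothing about which way the causal arrow points.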

Update 2 [Sept. 13, 2017] Added the quote about this article/blog post that was provided by one of the paper’s authors, Michal Kosinski. 

––––

Steven B. Bryant is a researcher and author who investigates the innovative application and strategic implications of science and technology on society and business. He holds a Master of Science in Computer Science from the Georgia Institute of Technology, where he specialized in machine learning and interactive intelligence. He also holds an MBA from the University of San Diego. He is the author of DISRUPTIVE: Rewriting the rules of physics, a thought-provoking book that shows where relativity fails and introduces Modern Mechanics, a unified model of motion that fundamentally changes how we view modern physics. DISRUPTIVE is available at Amazon.com, BarnesAndNoble.com, and other booksellers!

Photo courtesy of Pixabay.com.