The Technological “Gaydar” – The Problems with Facial Recognition AI

By: Anat Lior

Authored on: Thursday, January 11, 2018

Often, our environment shapes the way that we see ourselves. If those around us perceive us in a certain way, we may begin to act in accordance with those perceptions. This self-fulfilling prophecy is a significant part of our social lives and the way we interact with our environment. Now imagine that artificial intelligence (AI) software allowed society to publicly label important aspects of your identity from an early age. How would this label affect your self-perception? What effect would it have on your freedom of choice and autonomy? The purpose of this short post is to present three legal issues with the use of AI as a “technological gaydar”—(1) privacy infringement; (2) the use of AI-generated results as evidence in countries that criminalize homosexuality; and (3) the loss of individual autonomy.

In September 2017, two researchers from Stanford University published the results of a controversial study that used artificial intelligence software to identify sexual orientation based only on facial features. The researchers went through 14,776 dating profiles (of both men and women) posted publicly on an American dating website. From those profiles, they compiled more than 35,320 public photos, each of which displayed a single face clearly enough to analyze. The photos were also chosen to provide equal numbers of self-identified males and females, and of gay and straight individuals. Then, building on basic facial recognition software, the algorithm searched for patterns in the photos that correlated facial features with sexual orientation.
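To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of the general approach described above: convert each photo into a numeric feature vector, then fit a classifier that looks for patterns correlating those features with a self-reported label. This is not the researchers' code; the embedding step is a stand-in that returns random vectors, and scikit-learn and NumPy are the only assumed dependencies.

```python
# Hypothetical sketch of a "facial features -> label" pipeline.
# The embedding function is a placeholder, not a real face-recognition model.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def embed_face(photo_id: int) -> np.ndarray:
    """Placeholder for a facial-recognition embedding (e.g., landmark distances).
    A real system would run a face-detection/embedding model on the image."""
    return rng.normal(size=128)

# Hypothetical balanced dataset: equal numbers of profiles per self-reported label.
photo_ids = list(range(1000))
labels = np.array([i % 2 for i in photo_ids])      # 0/1 self-reported label
features = np.stack([embed_face(pid) for pid in photo_ids])

# The classifier searches for feature patterns that correlate with the label;
# on random embeddings like these, cross-validated accuracy stays near chance (~0.5).
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, labels, cv=5)
print("Cross-validated accuracy:", scores.mean())
```

The point of the sketch is only the shape of the method: whatever "signal" the classifier reports depends entirely on what the feature-extraction step and the training labels actually encode.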

The results indicated that the self-trained algorithm could distinguish gay men from straight men with 81% accuracy; for women, the success rate was 71%. Human judges, by contrast, identified sexual orientation correctly only 61% of the time for men and 54% of the time for women.

The researchers partly justified their use of AI to identify sexual orientation by claiming that the study provided strong support for the prenatal hormone theory, which posits a link between hormone exposure before birth and a child’s later sexual orientation. This claim has been met with vociferous criticism.

Some, including the researchers themselves, have pointed to various limitations of the study’s methodology and called into question the validity of the results. For example, the study drew on a narrow demographic—white people who self-identified as gay or lesbian—without accounting for non-binary gender identities or including self-identified bisexuals in the analysis.

Even though this research has been critiqued as “junk science” and there is no certainty that it can truly accomplish what it aims to, the principles at stake are real. Facial recognition is becoming a common tool in the biometric identification industry, making such software ever easier to use and deploy. When paired with AI, the ability to infer personal attributes (e.g., political preference, IQ, etc.) from external appearance poses troubling legal and non-legal issues. Some have already attempted to use AI facial recognition for other predictive purposes, such as identifying terrorists and potential lawbreakers.

The potential for algorithmic bias cannot be overstated in these applications of AI. Algorithms can produce biased results when they are trained on biased data, which is quite likely given that the initial data set is collected by humans and is thus influenced by explicit and implicit prejudice. A machine learning algorithm learns how to make predictions by analyzing patterns in an initial data set, given to it at its “birth,” and then looks for similar patterns in the new data it receives. If the algorithm identifies an inaccurate or biased pattern in a skewed initial data set, any future analysis of new information will be tainted by that pattern. In the study in question, the initial data set used to train the algorithm to predict sexual orientation is likely affected by selection biases that would produce inaccurate results in future analyses or in other applications.
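A hypothetical sketch can illustrate how a skew at “birth” carries through to predictions. The Python snippet below (again using scikit-learn, with entirely synthetic data and made-up numbers, not anything from the study) trains a simple classifier on a sample in which one group is heavily overrepresented and then evaluates it on a balanced population; the underrepresented group ends up misclassified more often.

```python
# Synthetic illustration of selection bias propagating from training data to predictions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, mean):
    """Generate n synthetic feature vectors drawn around a group-specific mean."""
    return rng.normal(loc=mean, scale=1.0, size=(n, 5))

# Skewed training set: group 1 is heavily overrepresented, mimicking a data set
# scraped from a single website's self-selected users.
X_train = np.vstack([make_group(900, 0.5), make_group(100, -0.5)])
y_train = np.array([1] * 900 + [0] * 100)

model = LogisticRegression().fit(X_train, y_train)

# Balanced evaluation set standing in for the broader population.
X_test = np.vstack([make_group(500, 0.5), make_group(500, -0.5)])
y_test = np.array([1] * 500 + [0] * 500)

pred = model.predict(X_test)
print("Accuracy on overrepresented group:", (pred[:500] == y_test[:500]).mean())
print("Accuracy on underrepresented group:", (pred[500:] == y_test[500:]).mean())
```

The specific numbers are arbitrary; the point is only that an imbalance baked into the training data shifts the model’s decisions against the group it saw least.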

Apart from the threat of bias, this research presents new concerns about privacy rights, especially in relation to the potential abuse of this tool by governments that still criminalize homosexuality. The right to privacy in this context may be divided into three elements: first, control over what information is gathered; second, safeguarding the integrity of that information; and third, how the information is used by the service or product provider. These three elements, described in more detail below, are deeply connected to the concept of user consent to terms and conditions of service and to the question of whether authentic informed consent truly exists in the age of big data. The answer to this question has significant effects on our ability to fully protect our privacy.

First, the entity that gathers the information likely never obtained explicit permission from individuals to collect their photographs and use them to train predictive algorithms. Some people may authorize these kinds of activities without clearly understanding them—for example, through user agreements that most sign without reading. However, the AI used in this research is not governed by the principles we are accustomed to seeing in this space. Users are not granting permission for data gathering in exchange for use of a service, or supplying information that may improve a company’s product; rather, they would likely never have entered into a license agreement at all. In the United States, for example, police need not obtain individual consent to use facial recognition software. Frighteningly, facial recognition software can be employed without a person’s permission via public cameras or social media profiles. Emerging technologies will only make this easier. For example, the new iPhone X uses facial recognition in lieu of an entry password, replacing the fingerprint sensor of previous iPhone models. The facial mapping data from this feature will likely be shared with third-party app developers, and from there, the dissemination of a user’s facial data to other parties may well be unavoidable.

Second, absent a formal user agreement, the entity that collected the information has no obligation to safeguard its integrity. This presents considerable risks, such as identity theft. It also opens the door to our third concern—the potentially abusive applications of the software.

Finally, there are many concerns about the application or potential use of biometric user data. If this technology is universally disseminated, anyone could be recognized on the basis of their biological data—including their facial features, fingerprints, and even brain waves. And as the number of analyzable biometric features increases, so does the risk of implicit bias. Further, these biometric tools could be used anytime and anywhere, especially when combined with other devices such as drones and automated vehicles.

This technology, in the hands of governments, could be weaponized as a tool of discrimination against citizens, even if unintentionally or as the result of implicit bias. Specifically, facial recognition that predicts one’s sexual orientation could have catastrophic effects on LGBTQ communities in countries where homosexuality is criminalized. For example, if applied in the Russian Republic of Chechnya, where a “gay purge” has been conducted over the last six months, this tool could have deadly consequences. As of May 2017, at least seventy-four countries still considered gay relationships illegal, in some cases punishable by life in prison or even death. Although there is some variation in the exact count, the implication is clear—if this tool were to reach the wrong hands, such as government and law enforcement agencies in these countries, it could be used to devastate LGBTQ communities.

The mere fact that these acts are considered illegal in some countries is beyond our current control. However, we should be careful about handing oppressive regimes tools to further their agendas. This fear dwarfs any other concern raised by this tool: if it were used as substantial evidence to prosecute people for being gay, it could enable a new, technologically enhanced form of genocide. This leads me to believe that, in today’s socio-political context, there is no good reason for the world community to develop such a tool.

Every individual has the right to express themselves, and in doing so, can gain self-fulfillment, self-realization and self-development. Choosing and accepting one’s sexual orientation is one of the most expressive things one can do. Depriving individuals of that autonomy may have harmful effects on their dignity and privacy. Individual autonomy lies at the foundation of human dignity, but that autonomy is infringed upon by this deterministic AI approach.

The collision of emerging technologies and civil rights is nothing new in this age of technological advancement, and we often tolerate intrusions on our privacy in exchange for the improvements these technologies bring to our quality of life. However, it is difficult to identify a substantial benefit of this new AI software that could outweigh its detrimental effects. Perhaps there is a marginal benefit in allowing individuals to identify themselves easily without carrying identification, and perhaps this line of research may prove that facial features tell us more than we realize. But it also perpetuates the notion that our characteristics are preprogrammed and that we have no real control or choice over our behaviors. Undermining a person’s identity and self-perception in this way requires very stringent justification, given the damage it may inflict on their autonomy.

It is true that consumers are constantly labeled in order to be served highly targeted political and commercial advertising. However, that sort of labeling carries a different set of implications than being labeled as part of the LGBTQ community, which may have a far greater effect on one’s family, career trajectory, ability to rent an apartment or live in a certain neighborhood, health, experiences with discrimination, and more.

Of course, the weight of these implications is context-dependent. It depends on an individual’s place of birth; their family’s religious beliefs; the degree of social conservatism in their country; whether homosexuality is considered a crime where they live; whether that law is actually enforced; and perhaps other influential aspects of our environment that we have yet to discover. Being gay, or being labeled as such, may have vast implications for one’s life. It is one thing if an individual has made that choice for himself or herself. It is another thing entirely if he or she is labeled as such by a “technological gaydar”—regardless of whether that label is, in fact, accurate.

Technology is supposed to enhance our quality of life and, in an ideal world, promote equality. As these technologies continue to develop, we will have to answer the question of whether this path is worth taking, or whether there are certain aspects of our lives that should be left technologically neutral.