I’m a straight white guy, which is equal parts boring and pedestrian. Because we all tribe together, that means the majority of my friends are straight.
<This is where I should insert a note to tell you that I have friends who aren’t straight, and that I’d make a good to great VP of Diversity, even though I’m white, male and straight. I’m not going to do that – instead I’ll let you soak in the irony that non-diverse people love to insert context here.>
You know what a lot of my straight friends think they have? Laser-focused Gaydar. Gaydar is defined by Wikipedia as follows:
Gaydar (a portmanteau of gay and radar) is a colloquialism referring to the intuitive ability of a person to assess others’ sexual orientations as gay, bisexual or heterosexual. Gaydar relies almost exclusively on non-verbal clues and LGBT stereotypes. These include the sensitivity to social behaviors and mannerisms; for instance, acknowledging flamboyant body language, the tone of voice used by a person when speaking, overtly rejecting traditional gender roles, a person’s occupation, and grooming habits.
The only thing I would add to that definition is that almost everyone I know believes their Gaydar is 100% accurate, which can’t be right. What they’re missing is the FALSE NEGATIVES – they can make the easy calls, but they never know about all the times their Gaydar failed them.
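That gap between perceived and actual accuracy is easy to show with numbers. Here’s a quick sketch (all figures invented for illustration) of how someone who only counts the calls they made can feel 100% accurate while missing most of the time:

```python
# Hypothetical numbers: a person's "Gaydar" flags 40 of 100 gay
# acquaintances (the easy calls) and misses the other 60 entirely.
true_positives = 40   # calls made, and correct
false_negatives = 60  # never flagged, so never counted by the guesser

# Perceived accuracy: every call they actually *made* felt right.
perceived_accuracy = true_positives / true_positives

# Actual recall: the misses count too.
actual_recall = true_positives / (true_positives + false_negatives)

print(perceived_accuracy)  # 1.0 - feels infallible
print(actual_recall)       # 0.4 - reality
```

The trick is that false negatives are invisible to the person doing the guessing, so their mental scorecard only ever contains hits.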
You know whose Gaydar is really good? Apparently the Gaydar of Artificial Intelligence – More from CNBC:
“Artificial Intelligence (AI) can now accurately identify a person’s sexual orientation by analyzing photos of their face, according to new research.
The Stanford University study, which is set to be published in the Journal of Personality and Social Psychology and was first reported in The Economist, found that machines had a far superior “gaydar” when compared to humans.
The machine intelligence tested in the research could correctly infer between gay and straight men 81 percent of the time, and 74 percent of the time for women. In contrast, human judges performed much worse than the sophisticated computer software, identifying the orientation of men 61 percent of the time and guessing correctly 54 percent of the time for women.”
It’s been said that technology and data do not discriminate. But A.I. changes all of that, because it learns as it goes along, which means that A.I. is going to start discriminating – in this case, against the LGBT crowd. It won’t be called discrimination, however – it will be called “opportunity maximization”. Here are 3 ways A.I. will ultimately learn to discriminate against the LGBT and non-LGBT crowds as a result of that learning:
1–A hiring manager spends large amounts of time watching Fox News and is a member of a church that would be considered less than friendly to the LGBT community. Since A.I. can now ID who’s gay, it learns over time that career outcomes are never good with this type of hiring manager, so it stops feeding gay men jobs with that type of hiring manager.
2–An image scan of LinkedIn shows that the managerial ranks of a company include an industry-low number of LGBT employees. As a result, A.I. stops prioritizing LGBT candidates for the company’s managerial jobs, instead showing them those jobs on the 8th page of Google for Jobs search results.
3–A gay hiring manager has a track record of success hiring and promoting high-performing LGBT community members – which is a good thing. BUT, the search results delivered to this manager always prioritize the type of candidate who’s outperformed expectations with him in the past – as a result, he sees a disproportionate number of LGBT resumes when compared to the marketplace.
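That third scenario is a classic feedback loop in a learning-to-rank system: a ranker that boosts candidates resembling past successful hires will, over time, over-represent whatever group those hires came from. A minimal sketch – all names, scores and group labels below are invented for illustration:

```python
from collections import Counter

# 20 hypothetical candidates, evenly split between groups A and B,
# all with identical skill. Any skew in the shortlist comes purely
# from the learned "fit" bonus, not from ability.
candidates = [
    {"id": i, "group": "A" if i % 2 == 0 else "B", "skill": 0.5}
    for i in range(20)
]

# The manager's historical outcomes, as the A.I. has learned them.
past_success_rate = {"A": 0.9, "B": 0.5}

def rank(cands):
    # Score = skill + learned historical-outcome bonus for the group.
    return sorted(
        cands,
        key=lambda c: c["skill"] + past_success_rate[c["group"]],
        reverse=True,
    )

shortlist = rank(candidates)[:5]
print(Counter(c["group"] for c in shortlist))  # Counter({'A': 5})
```

Even though both groups are equally skilled, the shortlist is 100% group A – the system is “maximizing” against its own history, and every hire it drives feeds the next round of training data.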
Are these examples of discrimination? Some would say no, because numbers and data don’t lie – these are just examples of A.I. simply trying to maximize business results based on the increasing treasure trove of data that will come online over time.
But in maximizing business results, A.I. takes opportunity away from both LGBT and non-LGBT candidates. And this is only one example – take any protected or non-protected class, and A.I. will be able to ID the segmentation.
The gaydar of A.I. is strong. What wins as a result? Business results and outcomes, or equal opportunity?
We shall see.
Kris Dunn is a Partner and CHRO at Kinetix, a national RPO firm for growth companies headquartered in Atlanta. He’s also the founder of Fistful of Talent (founded in 2008) and The HR Capitalist (2007) – and has written over 70 feature columns at Workforce Management magazine. Prior to his investment at Kinetix, Kris served in HR leadership roles at DAXKO, Charter and Cingular. In his spare time, KD hits the road as a speaker and gives the world what it needs – pop culture references linked to Human Capital street smarts.