Category Archives: AI

HireVue Uses a Face-Scanning Algorithm to Decide Who to Hire



It has been said that the robots are coming to take your job. Not as much has been said about artificial intelligence being used to sort through job applicants and determine who to hire. A recruiting-technology firm called HireVue does just that. Your next job interview might require you to impress an algorithm instead of an actual human.

The Washington Post has a lengthy, detailed article about HireVue and the ethical implications of its use. According to the Post, more than 100 employers now use the HireVue system, including Hilton, Unilever, and Goldman Sachs, and more than a million job seekers have been analyzed. The use of HireVue has become so pervasive in the hospitality and finance industries that universities are training students on how to look and speak for the best results.

But some AI researchers argue the system is digital snake oil – an unfounded blend of superficial measurements and arbitrary number-crunching that is not rooted in scientific fact. Analyzing a human being like this, they argue, could end up penalizing nonnative speakers, visibly nervous interviewees, or anyone else who doesn’t fit the model for look and speech.

According to The Washington Post, the AI in HireVue’s system records a job candidate and analyzes their responses to questions created by the employer. The AI focuses on the candidate’s facial movements to determine how excited they seem about a certain work task or how they would handle angry customers. Those “Facial Action Units” can make up 29 percent of a person’s score; the words they say and the “audio features” of their voice make up the rest.
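
To make that weighting concrete, here is a minimal sketch of how such a composite score might be assembled. The function name, the sub-scores, and the exact 29/71 split between facial and language/audio features are illustrative assumptions based on the Post’s description, not HireVue’s actual code.

    # Hypothetical sketch of a weighted interview score, based on the
    # Washington Post's description. All names and numbers here are
    # illustrative assumptions, not HireVue's actual model.

    FACIAL_WEIGHT = 0.29          # "Facial Action Units" reportedly ~29% of the score
    LANGUAGE_AUDIO_WEIGHT = 0.71  # words spoken plus "audio features" of the voice

    def composite_score(facial_score, language_audio_score):
        """Combine sub-scores (each assumed to be normalized to 0..1)."""
        return (FACIAL_WEIGHT * facial_score
                + LANGUAGE_AUDIO_WEIGHT * language_audio_score)

    # Example: a visibly nervous candidate whose facial analysis rates
    # poorly loses points that no human interviewer ever signed off on.
    print(composite_score(0.40, 0.85))  # ~0.72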

This situation makes me think of ImageNet Roulette, an AI that was trained on the ImageNet database. People posted selfies to ImageNet Roulette, and the AI gave them problematic classifications. You may have seen people sharing their selfies on social media, and noticed the racist, misogynistic, and cruel labels that the AI added.

The purpose of ImageNet Roulette was to make it abundantly clear that AI can be biased (and cruel) when it is trained on a dataset that includes very negative classifications of people. From this, it seems entirely possible that hiring decisions made by AI such as HireVue could be biased for or against certain types of people. I would like to see some research done to determine who the HireVue AI favors – and who it ends up excluding.


ImageNet Roulette Reveals that AI Can Be Biased



Have you heard of ImageNet Roulette? Chances are, some of the people you follow on social media have tried it out. In some cases, ImageNet Roulette produces controversial, even cruel, results. This is a feature, not a bug!

ImageNet Roulette was launched earlier this year as part of a broader project to draw attention to the things that can – and regularly do – go wrong when artificial intelligence models are trained on problematic training data.

The ImageNet Roulette website provides further explanation. ImageNet Roulette is trained on the “person” categories from a dataset called ImageNet, which was developed at Princeton and Stanford Universities in 2009. It is one of the most widely used training sets in machine learning research and development.

The AI that is trained on the ImageNet dataset will base its responses to each selfie on the information in that dataset. This posed a dilemma for the researchers who released ImageNet Roulette.

One of the things we struggled with was that if we wanted to show how problematic these ImageNet classes are, it meant showing all the offensive and stereotypical terms they contain. We object deeply to these classifications, yet we think it is important that they are seen, rather than ignored or tacitly accepted. Our hope was that we could spark in others the same sense of shock and dismay that we felt as we studied ImageNet and other benchmark datasets over the last two years.

A warning appears near the top of the ImageNet Roulette website: ImageNet Roulette regularly returns racist, misogynistic, and cruel results. It points out that this is because of the underlying dataset it draws on: ImageNet’s “Person” categories. ImageNet is one of the most influential training sets in AI, and ImageNet Roulette is a tool designed to show some of the underlying problems with how AI classifies people.

If you put a selfie on ImageNet Roulette and received racist, misogynistic, or cruel results, you may have felt hurt or offended. That is because the AI was basing its responses on a dataset that includes very negative classifications of people. It seems to me that the point of ImageNet Roulette was to emphasize that AI cannot be unbiased if the data it has to work with is biased. What better way to make that clear than by letting people post their results to social media?
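
A toy example makes the mechanism plain. The sketch below is not ImageNet Roulette’s pipeline; it is a minimal nearest-neighbor classifier with invented features and labels, showing that a model can only hand back the labels its training data contains.

    # Toy illustration (not ImageNet Roulette's actual code): a classifier
    # can only emit labels that exist in its training data. If the label
    # set is hostile, the predictions are hostile, however "neutral" the
    # algorithm itself may be.

    training_data = [
        ((0.9, 0.1), "leader"),
        ((0.8, 0.2), "leader"),
        ((0.2, 0.9), "loser"),   # a cruel label baked into the dataset
        ((0.1, 0.8), "loser"),
    ]

    def nearest_neighbor_label(x):
        """1-nearest-neighbor: return the label of the closest training point."""
        def dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return min(training_data, key=lambda pair: dist(pair[0], x))[1]

    # A new "selfie" that happens to land near the second cluster
    # inherits the cruel label, no malice required at inference time.
    print(nearest_neighbor_label((0.15, 0.85)))  # -> "loser"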

The ImageNet Roulette project has officially achieved its aims. It will no longer be available online after September 27, 2019. It will, however, remain in circulation as a physical art installation, currently on view at the Fondazione Prada Osservatorio in Milan until February 2020.


Microsoft Invests in and Partners with OpenAI



Microsoft has formed a multiyear partnership with OpenAI. Microsoft has invested $1 billion and will focus on building a platform that OpenAI will use to create new AI technologies.

Microsoft Corp. and OpenAI, two companies thinking deeply about the role of AI in the world and how to build secure, trustworthy and ethical AI to serve the public, have partnered to further extend Microsoft Azure’s capabilities in large-scale AI systems. Through this partnership, the companies will accelerate breakthroughs in AI and power OpenAI’s efforts to create artificial general intelligence (AGI). The resulting enhancements to the Azure platform will also help developers build the next generation of AI applications.

The partnership covers the following:

  • Microsoft and OpenAI will jointly build new Azure AI supercomputing technologies
  • OpenAI will port its services to run on Microsoft Azure, which it will use to create new AI technologies and deliver on the promise of artificial general intelligence
  • Microsoft will become OpenAI’s preferred partner for commercializing new AI technologies

The press release states that Microsoft and OpenAI will build a computational platform in Azure that will train and run increasingly advanced AI models, include hardware technologies that build on Microsoft’s supercomputing technology, and adhere to the two companies’ shared principles on ethics and trust. Their stated intent is to create a foundation for advancements in AI that can be implemented in a safe, secure, and trustworthy way.

OpenAI states that they and Microsoft have a mission to ensure that artificial general intelligence (AGI) benefits all of humanity. This requires ensuring that AGI is deployed safely and securely; that society is well-prepared for its implications; and that its economic upside is shared.

I’m willing to believe that OpenAI and Microsoft are being honest in their motivations. My concern is that they may be unable to prevent the problem of having biased data unintentionally seeping into their AGI. I’m very curious to see precisely how the economic upside of their AGI is shared and who it is shared with.


Your Flickr Photos May Have Been Used for Facial Recognition



It has become common for people to post selfies, and photos of friends and family, online. Professional photographers who use models may post their photos in an online portfolio. Unfortunately, photos that include people’s faces are being used without permission by researchers who want to create facial recognition algorithms.

NBC News reported that, in January of 2019, IBM released a collection of nearly a million photos taken from Flickr, coded to describe the subjects’ appearance. According to NBC News, IBM promoted the collection to researchers as a progressive step toward reducing bias in facial recognition.

I personally feel that there are a lot of ethical problems with what IBM has done. The most obvious one is that it didn’t ask the photographers if it could use their photos.

A company as large as IBM has the money to pay photographers for the use of their photos. Stealing other people’s art is wrong. IBM is also big enough to hire a few people to get consent forms from the people who are in the photographs.

Another ethical problem is that facial recognition software is controversial. It evokes a “Big Brother is watching you” kind of feeling. Personally, I would feel disgusted if my face was used to train facial recognition software.

In July of 2018, the ACLU tested Amazon’s facial recognition tool (called “Rekognition”). It incorrectly matched 28 members of Congress, identifying them as people who had been arrested for a crime. False matches like these could result in police arresting the wrong person.

NBC News reported that IBM said Flickr users can opt out of the database. However, NBC News discovered that it’s almost impossible to get photos removed.

Now would be a good time to make your Flickr and Instagram accounts private. Don’t let grabby companies steal your photos and use them in an ethically questionable algorithm.


AI is Coming to Take Your Jobs



President Donald Trump has signed an Executive Order on Maintaining American Leadership in Artificial Intelligence.

Reuters summarized it as “an executive order asking federal government agencies to dedicate more resources and investment into research, promotion and training on artificial intelligence, known as AI.” Reuters pointed out that there was no specific funding announced for the initiative.

According to Reuters:

AI and deep machine learning raise ethical concerns about control, privacy, and cybersecurity, and are set to trigger job displacements across industries and companies, experts say.

The executive order comes after the White House held a meeting on AI in May with more than 30 major companies including Ford Motor Co., Boeing Co., Amazon.com, Inc., and Microsoft Corp.

Personally, this makes me feel uncomfortable. I’ve no idea what these companies (and others like them) will spend on replacing their current systems with AI – but I suspect it will cost them less than paying a human worker to do the same job. Robots and AI systems don’t need sick days, or health insurance coverage, or raises.

The executive order appears to require grants for training programs at the high school, undergraduate, and graduate fellowship levels, as well as alternative education. It does not include any AI training for people who are currently working in industries that are likely to invest in AI.

American workers now have to worry not only about robots coming to take their jobs, but also about being replaced by AI.


DeepMind’s AlphaZero Beats State-of-the-Art AI in Chess



DeepMind introduced AlphaZero in 2017. It is a single system that taught itself how to master chess, shogi, and Go, beating state-of-the-art programs in each case. AlphaZero has developed a ground-breaking, highly dynamic, and unconventional style of play.

A report titled “A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play” was published in Science. Part of the report said: “The ability of AlphaZero to adapt to various game rules is a notable step toward achieving a general game-playing system.”

AlphaZero replaces the handcrafted knowledge and domain-specific augmentations used in traditional game-playing programs with deep neural networks, a general-purpose reinforcement learning algorithm, and a general-purpose tree search algorithm.
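
DeepMind’s description suggests a compact core loop: a single network proposes moves and evaluates positions, a tree search refines those proposals, and finished self-play games become the training targets for the next iteration. The sketch below is a heavily simplified, runnable illustration of that loop; the Nim stand-in game and the uniform stub search are my assumptions, not DeepMind’s MCTS or network.

    import random

    # Heavily simplified sketch of an AlphaZero-style self-play loop.
    # The Nim game and the uniform "search" are stand-ins so the control
    # flow runs end to end; this is an illustration, not DeepMind's code.

    class Nim:
        """Trivial stand-in game: take 1-3 stones; taking the last stone wins."""
        def initial_state(self):
            return 10
        def legal_moves(self, s):
            return [m for m in (1, 2, 3) if m <= s]
        def play(self, s, move):
            return s - move
        def is_terminal(self, s):
            return s == 0

    def stub_search(game, state):
        """Stand-in for MCTS guided by a policy/value network: uniform policy."""
        moves = game.legal_moves(state)
        return {m: 1 / len(moves) for m in moves}

    def self_play_game(game, search):
        """Play one game against itself, recording (state, policy, value) data."""
        history, state = [], game.initial_state()
        while not game.is_terminal(state):
            policy = search(game, state)   # in AlphaZero: MCTS visit counts
            history.append((state, policy))
            move = random.choices(list(policy), weights=list(policy.values()))[0]
            state = game.play(state, move)
        # The side that made the final move won (+1); values alternate
        # sign moving backward, giving each state an outcome target.
        return [(s, p, (-1) ** i) for i, (s, p) in enumerate(reversed(history))]

    # In the real system these triples train the network, which in turn
    # guides the next round of self-play search.
    print(self_play_game(Nim(), stub_search))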

In chess, AlphaZero first outperformed Stockfish after just 4 hours. AlphaZero defeated the 2016 TCEC (Season 9) world champion Stockfish, winning 155 games and losing just six games out of 1,000.

In shogi, AlphaZero first outperformed Elmo after 2 hours. AlphaZero defeated the 2017 CSA world champion version of Elmo, winning 91.2% of games.

In Go, AlphaZero defeated AlphaGo Zero, winning 61% of games.

I wonder if, in the future, eSports will include competitions between AlphaZero and various other AI algorithms. It seems to me that people who love to play chess are very interested in AlphaZero and what it can do. I can see potential for chess players to learn some of AlphaZero’s strategies in an effort to improve their game.


Would You Let AI Choose Your Child’s Babysitter?



Parents want to find a reliable, experienced, and compassionate person to babysit their child. Some parents find that person in a relative or a very close friend. Others will ask for recommendations from other parents that they know and trust. This system of vetting potential babysitters has been used for a very long time.

The Washington Post has an article about a company called Predictim. That same article was also posted on McCall.com.

Predictim offers an online service that uses “advanced artificial intelligence” to assess a babysitter’s personality. It scans through the potential babysitter’s Facebook, Twitter, and Instagram posts, and gives an automated “risk rating”.

According to the article, the “risk rating” can indicate the risk that the babysitter is a drug abuser. It also assesses the babysitter’s risk of bullying, harassment, being disrespectful, and having a bad attitude.
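
The article doesn’t say how Predictim’s scoring works internally, but even a crude hypothetical shows why context-blind scoring is worrying. The keyword scorer below is entirely invented, not Predictim’s method; it illustrates how a naive system can misread ordinary speech.

    # Invented illustration, not Predictim's actual system: a naive scorer
    # that flags posts on keywords alone, with no understanding of context.

    RISK_KEYWORDS = {"drunk": 3, "fight": 2, "hate": 2, "party": 1}

    def risk_rating(posts):
        """Sum keyword weights across posts; a higher total means 'riskier'."""
        return sum(weight
                   for post in posts
                   for word, weight in RISK_KEYWORDS.items()
                   if word in post.lower())

    # Context-blind scoring misreads ordinary, even admirable, speech:
    print(risk_rating(["I hate when people fight instead of talking it out"]))  # 4
    print(risk_rating(["Study party at the library tonight!"]))                 # 1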

The scan does not gather any information about how long the person has been a babysitter. It doesn’t ask if the babysitter has a degree in Early Childhood Education or Teaching. It doesn’t find out if the babysitter knows CPR, has worked with children who have special needs, or has worked in a daycare center.

The article says that the price of a Predictim scan starts at $24.99. It requires a babysitter’s name, email address, and her consent to share broad access to her social media accounts. Babysitters who decline are told that “the interested parent will not be able to hire you until you complete this request.”

In my opinion, as a person who has a teaching degree and who has spent years working in daycare, the Predictim analysis is both dangerous and misleading. What does Predictim do with the data it collects from babysitters’ social media accounts? Will this data be shared with employers in other fields? What if this data is leaked or stolen by nefarious people?

Many babysitters are teenagers, and I question the ethics of gathering personal data from people who are not adults. Does Predictim get permission from the teenagers’ parents before grabbing information from their social media accounts?

Another huge problem with using AI is that it tends to pick up the biases of whoever created it. Predictim’s AI could wind up excluding babysitters who are people of color, LGBT, of certain religious or ethnic backgrounds, or simply not photogenic enough in their Instagram posts.

Image by Pexels