Earlier this month I discovered a presentation by Princeton’s Professor Arvind Narayanan. It was titled “How to recognize AI snake oil”. It is available, at the time of this writing, on his Princeton webpage, and I think it is well worth the read. I found his piece compelling and timely: consumers in all spaces read or hear daily about products that incorporate AI. I’d like to explore some of the ideas in this paper from the lens of Human Capital Management. I’ll also discuss how Kronos is incorporating AI into our Workforce Dimensions platform.

Here’s one way to categorize artificial intelligence solutions

Genuine, rapid progress. Imperfect but improving. Fundamentally dubious. These are the three categories Prof. Narayanan uses to describe AI approaches. It’s easy enough to look around and identify wildly successful AI ventures. Machine vision and image perception are one stand-out example of tremendous achievement. If you have watched the NFL this season, you might recall a Microsoft AI ad demonstrating how the technology assists researchers by identifying hidden snow leopards in video footage. Image classification, it turns out, is a problem neural networks are extremely good at solving. This is genuine, rapid progress.

Here’s the rub: when a wide variety of technologies all adopt the same “AI” label, the mystery and power of what AI can do is heightened. It becomes a challenge to separate solutions that are genuine and bring value from those that may do more harm than good.

What should I watch out for?

Artificial intelligence models are only as good as the data that train them. Data are measured by their explanatory power: their ability to predict a desired outcome. A retail store selling shovels can probably claim that a forecasted snowstorm in the area is a significant predictor of shovel sales. Other data has little explanatory power. That same store would probably tell you that news of stalled trade talks with China will not materially affect snow shovel sales. The latter is important information for some markets, but not for a town hardware store selling shovels that same day.

Let’s look at an example. In the HCM world, imagine you design an AI model to algorithmically select and hire new employees. Your first step is providing the model with data describing what a desirable candidate looks like. What might be a good data source for employees you want to work for your own company? One answer right in front of us is to feed it the profiles of our organization’s top performers! For this experiment, imagine we can distill our employees down into data points our AI model can understand. Likewise, imagine we can measure potential candidates on the same dimensions and let the AI select new candidates. Even if our data is perfectly explanatory and our model is well tuned, we overlooked one crucial detail: bias in our dataset.

An unchecked AI in this example will now hire the people who look most like those already in the company. This might at first blush appear to be a positive, but on closer inspection the result is rather disturbing. A model like this would reinforce itself, selecting new hires who are all nearly identical on paper. Any biases in previous hiring will be perpetuated hire after hire. These biases can be as broad as preexisting cultural biases, or as specific as excluding candidates who think or work differently from existing employees. No company thrives forever without changing its business model, and no organization can hire the same candidate profile ad infinitum.
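To make the feedback loop concrete, here is a toy simulation (my own illustration, not any vendor’s actual algorithm). The “model” simply prefers candidates most similar to the average current employee; because every incumbent shares an arbitrary “style” trait inherited from past biased hiring, equally skilled candidates with a different style are never selected:

```python
import random

random.seed(0)

# Hypothetical setup: every current employee shares style == 0.0,
# an artifact of past biased hiring, not of job performance.
employees = [{"skill": random.gauss(0.8, 0.05), "style": 0.0} for _ in range(50)]

def centroid(people):
    n = len(people)
    return {k: sum(p[k] for p in people) / n for k in ("skill", "style")}

def distance(a, b):
    return sum((a[k] - b[k]) ** 2 for k in ("skill", "style"))

def hire(candidates, n_hires=5):
    # "AI" rule: pick the candidates most similar to existing staff.
    c = centroid(employees)
    return sorted(candidates, key=lambda p: distance(p, c))[:n_hires]

# The candidate pool is diverse: half have a different working style,
# and those candidates are just as skilled.
pool = [{"skill": random.gauss(0.8, 0.05), "style": random.choice([0.0, 1.0])}
        for _ in range(100)]

hired = hire(pool)
print([p["style"] for p in hired])  # → [0.0, 0.0, 0.0, 0.0, 0.0]
```

Every hire matches the incumbent style, so the next round of hiring starts from an even more homogeneous baseline: the bias compounds.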

Before I end this discussion, I should give credit to the vendors working on this kind of hiring software, a problem I have simplified quite a bit here. They acknowledge the issue of bias and are actively working to address it. However, any vendor that makes no mention of bias, or of how they account for it, is one I would be wary of.


What’s the takeaway?

Many of the areas where AI models are being discussed are still areas that humans are good at. In the near term I see AI growing as an aid to HR administrators; it is far from being their replacement. After all, Human Resources is in the very business of dealing with people. There is a lot that technology can do to support and assist these professionals, but managing the employee lifecycle requires a human to make the key decisions.

What is Kronos doing with AI?

Workforce Dimensions incorporates AI throughout the solution. You can find our AI offerings by looking for the Aimee tag. Two implementations I will mention here are the machine learning volume forecaster and shift swap recommendations.

The volume forecaster is a next-generation volume prediction tool. It predicts how much volume an item or department will sell. This is valuable not only for purchase planning, but also because the volume of items sold is a main driver of how many employees should be scheduled to work! The model’s accuracy and flexibility are leaps and bounds better than its predecessor’s.
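To illustrate the underlying idea (a toy baseline of my own with made-up numbers, not the actual Workforce Dimensions model), a volume forecast can be as simple as each weekday’s historical average, with the forecast then driving staffing levels:

```python
import math
from statistics import mean

# Hypothetical daily unit sales for one department over four weeks.
history = {
    "Mon": [120, 115, 130, 125],
    "Tue": [100, 98, 105, 102],
    "Wed": [110, 108, 112, 111],
    "Thu": [140, 150, 145, 148],
    "Fri": [200, 210, 205, 207],
    "Sat": [310, 300, 320, 315],
    "Sun": [180, 175, 185, 178],
}

def forecast(day):
    # Baseline forecast: the weekday's historical average. Learned
    # models are typically benchmarked against baselines like this.
    return mean(history[day])

def staff_needed(day, units_per_employee=50):
    # Forecasted volume drives staffing: round up to whole heads.
    return math.ceil(forecast(day) / units_per_employee)

print(forecast("Sat"))      # → 311.25
print(staff_needed("Sat"))  # → 7
```

A real forecaster would also account for trends, seasonality, promotions, and external signals (like that snowstorm), but the chain from predicted volume to required headcount is the same.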

The other AI solution I’ll call out in this blog is our shift swap recommendation tool. When an employee wants to swap shifts, Kronos will make personalized recommendations of shifts that employee might prefer. This is the same idea as Amazon recommending a few spatula options when it sees you’ve added a mixing bowl to your shopping cart. The goal is to present the employee with the shifts they are most likely to want, so they do not have to spend time searching for a suitable option.
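As a rough illustration of preference-based ranking (my own hypothetical sketch with an invented scoring rule, not the Kronos implementation), open shifts can be scored by their similarity to shifts the employee has worked before:

```python
from collections import Counter

# Hypothetical history: (weekday, start hour) pairs this employee
# has worked. This person clearly favors Saturday mornings.
past_shifts = [("Sat", 8), ("Sat", 8), ("Sun", 8), ("Sat", 16), ("Sat", 8)]

def recommend(open_shifts, history, top_n=2):
    counts = Counter(history)
    def score(shift):
        # Exact repeats of a past shift score highest; partial matches
        # on weekday or start time still earn some credit.
        day, hour = shift
        return counts[shift] * 2 + sum(1 for d, h in history if d == day or h == hour)
    return sorted(open_shifts, key=score, reverse=True)[:top_n]

open_shifts = [("Mon", 8), ("Sat", 8), ("Sun", 16), ("Sat", 16)]
print(recommend(open_shifts, past_shifts))  # → [('Sat', 8), ('Sat', 16)]
```

The Saturday shifts surface first, so the employee sees their likeliest choices without scrolling through every open shift.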

I’m excited to move into the new year and get closer to more Aimee and AI announcements from Kronos!

Published: Friday, December 20, 2019