Wednesday, April 24, 2013

Workforce Science: The Difference is in the Data

The New York Times Blogs (Bits)
April 23, 2013
Companies are increasingly using data-driven testing and measurement in the hiring and evaluation of employees—a field called workforce science. The enthusiasm for worker measurement and testing is not new, but the ability to collect and mine so much data on worker behavior is. It could open the door to new insights into what makes workers productive, innovative and happy on the job.
A previous column about the rise of what is being called "workforce science" reported that many companies are embracing the trend, but anyone familiar with business history might reasonably ask, "What's really new here?"
Certainly, the current enthusiasm for worker measurement and trait testing has its echoes in the past. Frederick Winslow Taylor's time-and-motion studies of physical labor, like bricklaying and shoveling coal, became the "scientific management" of a century ago.
And for decades, major American corporations employed industrial psychologists and routinely gave job candidates personality and intelligence tests.
Companies pulled back from such statistical analysis of employees in the 1970s, amid questions about its effectiveness, worker resistance and a wave of anti-discrimination lawsuits. Companies apparently figured that if any of their test results showed women or minorities doing poorly, it might become evidence in court cases, said Peter Cappelli, director of the Center for Human Resources at the University of Pennsylvania's Wharton School.
Today, worker measurement and testing is enjoying a renaissance, powered by digital tools.
What is different now, said Mitchell Hoffman, an economist and postdoctoral researcher at the Yale School of Management, is the amount and detail of worker data being collected. In the past, he said, studies of worker behavior typically might have involved observing a few hundred people at most—the traditional approach in sociology or personnel economics.
But a new working paper, written by Hoffman and three other researchers, mines data from companies in three industries—telephone call centers, trucking and software—on a total of more than one million job applicants and more than 70,000 workers over several years.
The measurements can be quite detailed, including call "handle" times and customer satisfaction surveys (call centers), miles driven per week and accidents (trucking), and patent applications and lines of code written (software).
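For concreteness, records like these might be organized along the following lines. This is purely an illustrative sketch; the field names are assumptions, not the paper's actual schema.

# Hypothetical sketch of the per-industry metrics described above.
# Field names are illustrative assumptions, not the paper's actual schema.

from dataclasses import dataclass

@dataclass
class CallCenterRecord:
    handle_time_seconds: float    # call "handle" time
    satisfaction_score: float     # customer satisfaction survey

@dataclass
class TruckingRecord:
    miles_per_week: float
    accidents: int

@dataclass
class SoftwareRecord:
    patent_applications: int
    lines_of_code: int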
Their subject is workforce referrals, and the paper is titled "The Value of Hiring Through Referrals."
Selecting new workers who are recommended by a company's current employees has long been seen as a way to increase the odds of hiring productive workers. It makes sense that the social networks of a company's workers would be a valuable resource to tap, and many companies pay their employees referral bonuses.
The researchers found that referred employees—across the three industries—were 25% more profitable than nonreferred workers. But the referral payoff comes entirely from recommendations from a company's best workers, whose productivity is above average.
"A recommendation from Joe Shmoe the dud is worse than hiring a nonreferred worker," Hoffman noted.
The paper suggests that companies might want to rethink across-the-board referral policies.
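To make the arithmetic of such a comparison concrete, here is a toy Python sketch. Every number and field name in it is hypothetical and will not reproduce the paper's figures; it simply shows the shape of the calculation: compare average profit per worker for referred and nonreferred hires, then split the referrals by the referrer's productivity.

# Toy sketch of the referral comparison (all names and numbers are
# hypothetical illustrations, not data from the paper).

from statistics import mean

workers = [
    # profit per worker, referred?, was the referrer above average?
    {"profit": 1300, "referred": True,  "referrer_above_avg": True},
    {"profit": 1250, "referred": True,  "referrer_above_avg": True},
    {"profit": 950,  "referred": True,  "referrer_above_avg": False},
    {"profit": 1000, "referred": False, "referrer_above_avg": None},
    {"profit": 1020, "referred": False, "referrer_above_avg": None},
]

referred = [w["profit"] for w in workers if w["referred"]]
nonreferred = [w["profit"] for w in workers if not w["referred"]]

# Headline comparison: relative profitability of referred hires.
lift = mean(referred) / mean(nonreferred) - 1
print(f"Referred hires: {lift:.0%} more profitable than nonreferred")

# Split referrals by who made the recommendation.
top = [w["profit"] for w in workers if w["referred"] and w["referrer_above_avg"]]
dud = [w["profit"] for w in workers if w["referred"] and not w["referrer_above_avg"]]
print(f"Referred by above-average workers: {mean(top):.0f}")
print(f"Referred by below-average workers: {mean(dud):.0f}")
print(f"Nonreferred baseline:              {mean(nonreferred):.0f}")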
"The previous work on worker referrals has been mostly anecdotal and impressionistic," said Stephen Burks, an economist at the University of Minnesota, Morris, who was a co-author of the paper. "It hasn't been quantified in this way before, the way you can with these rich data sets."
But another co-author, Bo Cowgill, points to a challenge for workforce science, and for much of the emerging social science built on Big Data. Cowgill, a doctoral student at the University of California, Berkeley, spent six years as a quantitative analyst at Google, so he has plenty of first-hand experience in sophisticated data handling.
The data in workforce science is observational data rather than data from experiments, which are the gold standard in science. What much of Big Data research lacks, Cowgill said, is rigor equivalent to that of randomized clinical trials in drug testing. That is, controlled experiments.
Observing how large numbers of people behave, Cowgill noted, can be extremely valuable, pointing to powerful correlations. But without controlled experiments, he added, you often cannot reach a deeper understanding of the causes of observed behavior: identifying causation rather than merely correlation.
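The distinction is easy to demonstrate in a toy simulation. In the hypothetical sketch below, a hidden trait drives both whether a worker opts into a training program and how productive that worker is, so the naive observational comparison overstates the training effect, while random assignment recovers it. All quantities are made up for illustration.

# Toy simulation of correlation vs. causation (all numbers hypothetical).
# A hidden trait drives both self-selection into training and productivity,
# so the naive observational comparison is biased; randomization is not.

import random

random.seed(0)
TRUE_EFFECT = 2.0  # productivity gain actually caused by training

def simulate(randomized, n=100_000):
    rows = []
    for _ in range(n):
        skill = random.gauss(10, 2)  # hidden confounder
        if randomized:
            trained = random.random() < 0.5  # assigned by coin flip
        else:
            # Skilled workers opt into training more often (self-selection).
            trained = random.random() < 0.2 + 0.05 * (skill - 10)
        output = skill + (TRUE_EFFECT if trained else 0) + random.gauss(0, 1)
        rows.append((trained, output))
    return rows

def diff_in_means(rows):
    treated = [y for t, y in rows if t]
    control = [y for t, y in rows if not t]
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"True causal effect:     {TRUE_EFFECT:.2f}")
print(f"Observational estimate: {diff_in_means(simulate(randomized=False)):.2f}")  # biased upward
print(f"Randomized estimate:    {diff_in_means(simulate(randomized=True)):.2f}")   # close to 2.0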
"Some people feel that knowing correlations are enough," Cowgill said. "Not me, and most economists would agree."
But other economists say this kind of Big Data research is just getting under way—and already yielding significant results. "I wouldn't sell short being able to see the correlations," said Erik Brynjolfsson, an economist at the Massachusetts Institute of Technology's Sloan School of Management. "That is a big step in itself. And this is the way science works. You start with measurement and it progresses to experiment."
