Employee in the big data era: Will you let robots determine your future at work?




(A version of this article was published in TLNT magazine)

Think about the data you share at work, in the most personal sense. You share with your employer, and sometimes with potential employers, many aspects of your life: details about your professional path, your personal status, and your healthcare, socioeconomic, legal, and geographical background. You also agree to share information about what you do at different times and places, whom you meet, what information you consume, and so on. Moreover, you leave digital footprints on the web, on social networks, and in various apps, and that data reveals a lot about you to employers. Have you ever considered how this data might affect you at work? How does your employer actually use the data about you, and how does technology enable that use? What is an employer allowed to do with your data, and what counts as crossing a red line, in terms of ethics and regulation?

AI (Artificial Intelligence) and ML (Machine Learning) are two buzzwords that dominate the HR tech world today. We don’t know yet whether this field is a bubble or a lasting influence on management practices. Nevertheless, the common opinion among professionals is that managers will make better, more informed decisions about the workforce by using predictive algorithms that, for example, match candidates to jobs or let employers know which employees are a flight risk. There are tons of discussions about this subject, but mainly from the organization’s point of view. What I’d like to do now, for a change, is take the employee’s perspective.

 

Should employees worry?

If you have tried to land a job lately, perhaps you had a video interview (e.g., by HireVue), or you were asked to play some mobile games (e.g., by Knack). These technologies, which probably offer you a pleasant experience as a candidate, actually enable organizations to predict your performance in certain roles, essentially by first exploring the reactions of high and low performers in those exact roles. As a candidate, you’ll probably consent to participate in these practices, even though you don’t know exactly what data these machines collect about you, or what secret predictive model they use behind the scenes.

I’m not saying that predictive models are bad. On the contrary, I believe that, in general, a machine that fits the right person to the right job, and does so better than a human whose perceptions may be biased, is actually positive: not only for organizations but also for employees, since they stand a better chance of thriving in the right roles. However, anyone with some general knowledge of ML can point to a confusion matrix and demonstrate that algorithms are not perfect, or, more precisely, quantify just how imperfect they are.
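To make that concrete, here is a minimal sketch of reading a confusion matrix for a hypothetical hiring-prediction model. The counts are invented for illustration and do not come from any real tool:

```python
# A minimal sketch (invented counts, not real hiring data) of how a
# confusion matrix exposes a model's mistakes.
import numpy as np

# Rows: actual outcome (0 = low performer, 1 = high performer).
# Columns: model prediction (0 = reject, 1 = recommend to hire).
confusion = np.array([[850, 150],   # 150 low performers wrongly recommended
                      [100, 400]])  # 100 high performers wrongly screened out

tn, fp = confusion[0]
fn, tp = confusion[1]

accuracy = (tp + tn) / confusion.sum()  # overall share of correct calls
precision = tp / (tp + fp)              # of those recommended, truly strong
recall = tp / (tp + fn)                 # of truly strong, actually recommended
fpr = fp / (fp + tn)                    # weak candidates mistakenly recommended

print(f"accuracy={accuracy:.2f}, precision={precision:.2f}, "
      f"recall={recall:.2f}, false positive rate={fpr:.2f}")
# accuracy=0.83, precision=0.73, recall=0.80, false positive rate=0.15
```

Even with headline accuracy above 80%, this hypothetical model screens out one in five genuinely strong candidates, an imperfection the candidate never gets to see.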

Why are predictive algorithms not perfect? There are many technical and statistical reasons, but the one that concerns me, in this context, is the possibility that human biases affect seemingly unbiased machines. The promise of ML and AI was that the more information we feed these sophisticated algorithms, the better they perform. Unfortunately, when the input data reflects the history of an unequal workplace, we are, in effect, asking a robot to learn our own biases. Garbage in, garbage out, right?

Such unfortunate effects can easily occur in the workplace. For instance, if an analyst explores the people who were promoted in the organization over the last decade and uses their data to predict high performance, the result might be a model that excludes minorities from predictions of high performance, simply because minorities were rarely promoted in the past due to social bias or discrimination. This example may be extreme, but it illustrates many subtler possibilities, as the sketch below demonstrates.
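Here is a minimal synthetic sketch of that mechanism, assuming (hypothetically) that historical promotion decisions penalized one group regardless of performance, and that an off-the-shelf model is trained on those records. The group names, coefficients, and data are all invented:

```python
# A synthetic sketch of bias leaking from history into a model. Assume
# (hypothetically) that over the past decade, members of group B were
# promoted far less often than group A, regardless of performance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Performance is identically distributed in both groups.
performance = rng.normal(size=n)
# Group membership: 0 = majority group A, 1 = minority group B.
group = rng.integers(0, 2, size=n)

# Historical promotion decisions: driven by performance, but with a
# strong penalty on group B (the human bias baked into the labels).
logits = 1.5 * performance - 2.0 * group
promoted = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

# Train a "neutral" model on the biased history.
model = LogisticRegression().fit(np.column_stack([performance, group]),
                                 promoted)

# Two candidates with identical performance, differing only in group:
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])
# Roughly [0.8, 0.4]: same performance, very different predicted odds.
```

Nothing in the code mentions discrimination; the model simply reproduces the pattern in its labels. That is how yesterday’s biased decisions quietly become tomorrow’s algorithm.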

 

Who will defend employees?

Defense (and self-defense) starts with awareness. Indeed, awareness of data protection and privacy is increasing and influencing society in general, particularly through regulation. Employee rights are being broadened these days in the context of workforce data, although not evenly in every corner of the world. In the EU, a new privacy regulation, the General Data Protection Regulation (GDPR), was recently published (and becomes enforceable on May 25th, 2018). It has serious implications for any employer who processes its employees’ and potential employees’ data, whether that data concerns the work environment or internet behavior. Among many other issues, the GDPR gives employees additional rights that strengthen control over their personal data, e.g., extended access and the right to be informed about data usage, data transfers, and storage periods. The new regulation is being studied closely by legal experts, and anyone who analyzes employee data will soon start consulting legal departments about activities that did not require consultation in the past. In Europe, a new organizational stakeholder, the Data Protection Officer (DPO), is emerging and will be involved in analytics projects.

However, in my opinion, compliance with the GDPR is only a starting point. It will certainly force HR analytics teams to become aware of privacy issues. But although it aims to protect privacy, I believe it will also change employees’ behavior, and HR analytics practitioners will have to respond: when people start exercising their rights and requesting access to their data, People Analysts will be ready, in advance, to give them comprehensive information about how their data is used. When employees start asking to correct or erase their data, employers will demand more transparency and security from HR software providers. Organizations will ensure that they process only the personal data necessary for the specific purpose they wish to accomplish, and therefore they’ll need long-term planning and more serious deliberation. This will move the field of People Analytics forward. The implication for employees and candidates: transparency! But not only that…

 

Beyond transparency

I believe that eventually, even if it takes a few years, the People Analyst role will include more components of procurement. Analysts will do less programming on their own and become experts in HR tech and analytics solutions. For the sake of regulations and ethics, they will learn to ask vendors hard questions and be more critical about model accuracy and data privacy, and in doing so they’ll contribute not only to a data-driven organizational culture but also to a work environment that is safe with respect to employee data. Employees and candidates, for their part, will judge employers not only by their Employee Experience perceptions but also by their ethics in data management; and when they feel secure, they’ll be more receptive and enthusiastic about cooperating with AI and ML to influence their career paths.

 

References:
Kevin Markham, “Simple guide to confusion matrix terminology”, dataschool.io
Stephen Buranyi, “Rise of the racist robots – how AI is learning all our worst impulses”, theguardian.com
Arnold Birkhoff, “9 Ways the GDPR will Impact HR Data & Analytics”, analyticsinhr.com


About the author:

Littal Shemer Haim brings Data Science into HR activities, guiding organizations to base decisions about people on data. Her vast experience in applied research, keen use of statistical modeling, constant exposure to new technologies, and genuine interest in people’s lives have all led her to focus nowadays on HR Data Strategy, People Analytics, and Organizational Research.


 

Replies to "Employee in the big data era: Will you let robots determine your future at work?"

  • Littal Shemer Haim
    October 20, 2017 (9:04 pm)

    “For any People Analytics work to be sustainable (and thus maximizing the benefit over time) it needs to benefit all stakeholders”, says Andrew Marritt in his thorough article “People Analytics, what’s in it for the Employees?”. The People Analyst needs high-quality data, which he can’t get without employees’ cooperation. “To do this you have to ensure they can see why it’s in their own personal interest”, Marritt explains.
    Marritt concludes that “As people analytics evolved, we’ve matured from solving problems which were of most interest to HR teams to addressing issues that met business objectives. The next phase will be about providing tools that enable individuals to meet their objectives, even if their objectives aren’t explicitly the same as the organizations.” I agree with Marritt’s opinion, and believe that the maturity of the field he describes goes hand in hand with the latest discussions about ethics and regulations.

  • Littal Shemer Haim
    October 26, 2017 (4:16 pm)

    Check this:
    http://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion
    This is the scariest article I’ve read in years, and I do read a lot.
    In a global world, cultures and ethics are like communicating vessels, so figure it out yourself…
    The GDPR contradicts a portion of the descriptions in this fascinating article.
    Nevertheless, scary…

  • Littal Shemer Haim
    November 5, 2017 (8:40 pm)

    Here are 5 ways HR can lead the way with GDPR, according to Barry Stanton, a partner at law firm Boyes Turner and head of its employment group:
    1. HR teams are used to handling data and data requests
    2. Robust policies and staff training are HR’s domain
    3. Bring risk management expertise to the fore
    4. HR can build in agility and resilience
    5. HR can enhance employees’ skills and capabilities

  • JT
    November 10, 2017 (8:43 pm)

    Hi Littal, I saw you tweeting about AI and I thought I’d check out your website. I really like it. Looks like you have come a long way!

  • Littal Shemer Haim
    November 16, 2017 (7:21 pm)

    Here are some excellent research references that emphasize why you can’t trust AI to make unbiased hiring decisions.

  • Littal Shemer Haim
    March 17, 2018 (12:11 pm)

    AI is only as effective as the data it is trained on. Machine intelligence accurately reflects the prejudices of the people whose data it learned from. Black boxes must be opened: when it comes to services for people, if your supplier can’t explain an algorithm’s decision, you shouldn’t use it. Great examples and solutions in the article: Now Is The Time To Act To End Bias In AI

  • Littal Shemer Haim
    October 31, 2018 (1:36 pm)

    If you based your algorithm on old data that reflects old gender bias, what would you expect? Read about gender bias in recruitment at Amazon: https://mashable.com/article/amazon-sexist-recruiting-algorithm-gender-bias-ai/
