We all love data. Data is essential to making sound decisions, and analyzing data gives us the capacity to anticipate and plan for future events. The question we need to ask is: what are the ethics of using all these data points?
According to an ADP article, in the past 20 years, our ability to collect, store, and process data has dramatically increased. There are exciting new tools that can help us automate processes, learn things we could not see before, recognize patterns, and predict what is likely to happen. Since our capacity to do new things has developed quickly, tech’s focus has been primarily on what we can do. Today, organizations are starting to ask what is the right thing to do.
Knowing the right thing to do is the holy grail of ethics, and it is elusive.
In a recent interview, Shannon Vallor, the Regis and Dianne McKenna Professor in the Department of Philosophy at Santa Clara University, stated, “I certainly think that the aspect of data science that focuses on predictive insights raises some special questions. The ability to act on inferences about the future that are produced by a system that doesn’t understand any of the data that it’s working with, that doesn’t understand the world the data represents, and that doesn’t have any relationship of care with the persons who generated the data is very sensitive.”
In January 2019, IBM Research published a blog post about a new dataset that the company had just released. It began with a question: “Have you ever been treated unfairly?” It then added, “most people generally agree that a fairer world is a better world…. That’s why we are harnessing the power of science to create AI systems that are more fair and accurate.” The blog explained: “Today, IBM Research is releasing a new large and diverse dataset called Diversity in Faces (DiF) to advance the study of fairness and accuracy in facial recognition technology.”
Face recognition may not be at the top of your list of concerns right now. But how would your organization react if an employee were identified through this technology by a government agency, or if you were asked to provide information to such an agency? These are complex questions.
While these are broad data and data analytics issues, there are local ones as well.
Employees are increasingly monitored through integrated, data-analytics-driven continuous monitoring systems that analyze a wealth of data concerning their behaviors and actions. While the use of these active monitoring systems has been advocated for improved performance measurement, increased productivity, and reduced costs, the discussion has generally ignored the ethical implications of such monitoring and its impact on employees’ morale and views of the organization. A study in the Journal of Information Systems, “Potential Employees’ Ethical Perceptions of Active Monitoring: The Dark Side of Data Analytics,” suggests that the presence of active monitoring systems significantly reduces potential employees’ perceptions of an organization’s ethics, as well as their likelihood of accepting, or being satisfied with, a position at the company.
A little closer to home, the recent onslaught of fraudulent unemployment claims is alarming. Identity theft is a federal crime punishable by fines, imprisonment, or both. Federal criminal law similarly prohibits related crimes, including identification fraud, credit card fraud, computer fraud, mail fraud, and mail theft. More and more states, including Arizona, Colorado, New Mexico, Utah, and Wyoming, are passing laws that make identity theft a crime. Where a state does not have a specific identity theft statute, it may prosecute those crimes under laws prohibiting forgery, theft, racketeering, fraud, and other offenses.
Another source of employee data is health information. Some organizations have wellness program arrangements with third-party vendors. These vendors provide a variety of services based on employees volunteering (or not) information about their lifestyles; employees receive advice and even additional benefits for doing so. Systems are in place to protect this type of data. The point is that who owns your data is almost inconsequential compared to who processes it.
To protect employee data, employers need to be diligent. For Colorado employers, the Colorado Consumer Data Privacy Law sets forth several general requirements applicable to all covered entities:
- a data disposal policy;
- an information security policy; and
- notification of security breaches concerning personal information.
The ethics of using data go beyond protecting it. Organizations need to begin thinking about creating principles for how they and their vendors use employee data. An ADP article by Jack Berkowitz, “AI and Data Ethics: 5 Principles to Consider,” suggests basing an organization’s data ethics on five elements: transparency, fairness, accuracy, privacy, and accountability. This list roughly parallels the seven General Data Protection Regulation (GDPR) data protection principles: lawfulness (fairness and transparency), purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality, and accountability.
The answers to ethical questions should begin with good questions. For example, who are the stakeholders involved, that is, the people whose interests are directly or indirectly impacted by the creation and release of a database like DiF? How might its development and deployment be evaluated through the ethical lenses of rights, justice/fairness, utilitarianism, the common good, and virtue ethics?
As the famous Spider-Man quote goes, “With great power comes great responsibility.” There is a wealth of insights in the data organizations collect on employees. Organizations need to be responsible stewards of that information. Employers Council can help; give us a call.