This research systematically explores and investigates the pervasiveness of AI bias impacts by collecting, analysing, and organizing these impacts into categories suitable for effective mitigation.
Authors
Chandni Bansal, O.P. Jindal Global University, Sonipat, Haryana, India.
Krishan K. Pandey, Professor, Jindal Global Business School, O.P. Jindal Global University, Sonipat, Haryana, India.
Rajni Goel, Professor, Information Systems and Supply Chain Management Department, School of Business, Howard University, Washington D.C., USA.
Anuj Sharma, Professor, Jindal Global Business School, O.P. Jindal Global University, Sonipat, Haryana, India.
Srinivas Jangirala, Associate Professor, Jindal Global Business School, O.P. Jindal Global University, Sonipat, Haryana, India.
Summary
Artificial Intelligence (AI) biases are becoming prominent today with the widespread and extensive use of AI for autonomous decision-making systems. Bias in AI can take many forms, from age discrimination and recruiting inequality to racial prejudice and gender differentiation. These biases have severe impacts at multiple levels, leading to discrimination and flawed decision-making. This research systematically explores and investigates the pervasiveness of AI bias impacts by collecting, analysing, and organizing these impacts into categories suitable for effective mitigation. An in-depth analysis, conducted through a systematic literature review, gathers and outlines the variety of impacts discussed in the literature.
Through a holistic qualitative analysis, the research reveals patterns in the types of bias impacts, from which a classification model is developed that places the impacts in four primary domains: fundamental rights, individuals and societies, the financial sector, and businesses and organizations. By identifying the impacts caused by AI bias and categorizing them systematically, targeted mitigation strategies specific to each impact category can be identified and leveraged to manage the risks of AI bias. This study will benefit practitioners and automation engineers worldwide who aim to develop transparent and inclusive AI systems.
Published in: Issues in Information Systems
To read the full article, please click here.