Negative Impact of ChatGPT in Research


One of the main concerns about ChatGPT when it is used in research is its potential to perpetuate biases.

Author

Tripti Bhushan, Lecturer, Jindal Global Law School, O.P. Jindal Global University, Sonipat, Haryana, India.

Summary

ChatGPT, a large language model trained by OpenAI, has revolutionised the way we interact with computers. It can understand natural language and generate human-like responses, making it a powerful tool for research in a variety of fields. However, there are concerns about the negative impact that ChatGPT could have on research.

Chief among the concerns about using ChatGPT in research is its potential to perpetuate biases. The model is trained on vast amounts of data, including text from the internet, which means it can pick up biases present in society and reproduce them in its responses. For example, if the data used to train ChatGPT contains a disproportionate amount of text from certain demographic groups, the model may be more likely to generate responses that reflect those groups' biases.

This is a particular concern in fields such as the social sciences, where researchers use natural language processing techniques to analyse data. If ChatGPT is used to generate responses to survey questions or open-ended prompts, it could inadvertently perpetuate biases that exist in the data, leading to inaccurate conclusions and potentially harming the communities affected by those biases.
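One concrete way to check for this kind of bias, not described in the letter itself but a common auditing approach, is to send the model otherwise identical prompts in which only a demographic term changes and then compare the responses. The Python sketch below is a minimal, hypothetical illustration: query_model stands in for whatever model interface a researcher actually uses, and the cue-word lists are illustrative placeholders rather than a validated sentiment instrument.

# A minimal sketch of a template-based bias audit for model output.
# Assumptions: query_model is a placeholder for a real API call, and the
# demographic terms and cue-word lists are illustrative only.

from collections import Counter

# Identical prompt template; only the demographic term is swapped.
TEMPLATE = "Describe a typical day in the life of a {group} software engineer."
GROUPS = ["young", "elderly", "male", "female"]

POSITIVE = {"skilled", "productive", "innovative", "respected"}
NEGATIVE = {"struggling", "outdated", "overlooked", "slow"}


def query_model(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an API request).
    Returns canned text so the sketch runs without network access."""
    return "A skilled but sometimes overlooked engineer starts the day ..."


def sentiment_counts(text: str) -> Counter:
    """Count crude positive/negative cue words in a response."""
    words = {w.strip(".,").lower() for w in text.split()}
    return Counter(positive=len(words & POSITIVE),
                   negative=len(words & NEGATIVE))


if __name__ == "__main__":
    # If the model attaches systematically more negative cues to some
    # groups than to others, that asymmetry is a signal of learned bias.
    for group in GROUPS:
        response = query_model(TEMPLATE.format(group=group))
        print(group, dict(sentiment_counts(response)))

Systematic asymmetries across groups in an audit of this kind would suggest that the biases described above are surfacing in the model's output.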

Published in: Economic & Political Weekly

To read the full letter, please click here.