The rise of generative artificial intelligence through applications like ChatGPT has increased awareness of the biases present within machine learning models. The data on which Large Language Models (LLMs) are trained contain inherent biases, as they reflect societal biases and stereotypes; models trained on such data can propagate these biases further. In this paper, I establish a baseline measurement of gender and racial bias in the domains of crime and employment across major LLMs, using “ground truth” data published by the U.S.

Dataset Files

Access to these files requires an IEEE Dataport subscription.

[1] Hima Thota, "Prompt Datasets to Evaluate LLM Safety", IEEE Dataport, 2024. [Online]. Available: http://dx.doi.org/10.21227/gjej-zp03. Accessed: Jul. 22, 2024.