Artificial intelligence (AI) ethics

This dataset contains simulated records for 3,000 students, generated for evaluating fairness in grade-prediction models. The dataset includes decile rankings based on historical performance, predicted grades, and demographic attributes such as socioeconomic status, school type, gender, and ethnicity. The data were created using controlled randomization techniques and include noise to reflect real-world prediction uncertainty. While entirely synthetic, the dataset is designed to mimic key structural patterns relevant to algorithmic fairness and educational inequality.
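The abstract does not specify the generation procedure. As a minimal sketch, records like these could be simulated along the following lines; all distributions, category labels, and noise parameters below are illustrative assumptions, not the dataset's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 3000  # number of simulated students, matching the dataset size

# Demographic attributes drawn from simple categorical distributions
# (proportions are assumptions for illustration only)
ses = rng.choice(["low", "medium", "high"], size=N, p=[0.3, 0.5, 0.2])
school_type = rng.choice(["state", "independent"], size=N, p=[0.85, 0.15])
gender = rng.choice(["female", "male"], size=N)

# Historical performance as a decile ranking (1 = top decile, 10 = bottom)
decile = rng.integers(1, 11, size=N)

# A latent "true" attainment that correlates with decile; the predicted
# grade adds Gaussian noise to reflect real-world prediction uncertainty
true_grade = 9.0 - 0.7 * (decile - 1) + rng.normal(0, 0.5, size=N)
predicted_grade = np.clip(np.round(true_grade + rng.normal(0, 1.0, size=N)), 1, 9)
```

Generating the demographics independently of the grades, as above, yields a fairness baseline with no built-in group disparity; a study of algorithmic bias would typically also inject controlled correlations between the demographic attributes and the prediction error.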

Text-to-image models, like Midjourney and DALL-E, have been shown to reinforce harmful biases, often perpetuating outdated and discriminatory stereotypes. In this study, we examine a particularly insidious bias largely overlooked in generative image research: brilliance bias. By age six, many children begin to internalize the damaging notion that intellectual brilliance is a male trait, a belief that persists into adulthood. Our findings demonstrate that popular text-to-image models exhibit this bias, further entrenching the misguided notion that exceptional intelligence is inherently male.

Replication data for the fsQCA model in: "Why do companies employ prohibited unethical artificial intelligence practices?"
