Sunday 02 February 2025
The use of artificial intelligence (AI) in higher education has become increasingly prevalent, with many universities relying on machine learning algorithms to make decisions about student admissions. However, a recent study highlights the biases these AI systems can introduce, and how those biases can shape the demographics of the student body.
Researchers analyzed data from a large urban research university that had implemented a test-optional policy, allowing students to choose whether to submit standardized test scores as part of their application. The team used machine learning algorithms to predict which students would be directly admitted into the university’s School of Science, and found that the models exhibited biases with respect to gender, race, and first-generation college status.
The study showed that under a test-optional policy, more women, non-white students, and first-generation college students were admitted than under a traditional test-required policy. However, the AI models also introduced new biases of their own: they incorrectly predicted admission (false positives) more often for white students than for non-white students, and incorrectly predicted rejection (false negatives) more often for first-generation college students than for their non-first-generation peers.
These biases are not necessarily intentional; rather, they arise when algorithms are trained on datasets that reflect biases already present in society. The researchers noted that even though the overall accuracy of the models was high, such group-level disparities in error rates could have significant implications for student diversity and inclusion.
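The study's own code and data are not reproduced here, but the kind of disparity it describes, a model whose errors fall unevenly across demographic groups even when overall accuracy is high, can be made concrete with a short sketch. The function names and the toy records below are invented for illustration; they compute per-group false positive and false negative rates, a standard way such admission-prediction biases are quantified.

```python
# Sketch: measuring group-wise prediction error rates, one common way to
# quantify the kind of disparity the study describes. All names and data
# here are hypothetical, not taken from the study itself.

def error_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate) for binary labels,
    where 1 = admitted and 0 = rejected."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

def group_error_rates(records):
    """records: iterable of (group, actual_admit, predicted_admit).
    Returns {group: (fpr, fnr)} so disparities between groups are visible."""
    by_group = {}
    for group, actual, predicted in records:
        truths, preds = by_group.setdefault(group, ([], []))
        truths.append(actual)
        preds.append(predicted)
    return {g: error_rates(t, p) for g, (t, p) in by_group.items()}

# Hypothetical toy data: (group, actually admitted, model predicted admit).
records = [
    ("A", 0, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 1),
    ("B", 0, 0), ("B", 0, 0), ("B", 1, 0), ("B", 1, 1),
]
rates = group_error_rates(records)
# Group A is incorrectly admitted more often (FPR 0.5, FNR 0.0);
# Group B is incorrectly rejected more often (FPR 0.0, FNR 0.5) --
# the same overall accuracy, but errors that fall on different groups.
```

A model can score equally well on aggregate accuracy for both groups here (6 of 8 correct) while its mistakes systematically favor one group, which is precisely why the study argues that aggregate accuracy alone is an insufficient audit.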
The study’s findings highlight the need for greater scrutiny of AI systems used in higher education admissions. As universities increasingly rely on machine learning algorithms to make decisions about students, it is essential that they are designed and trained to minimize bias and promote fairness.
In addition, the researchers emphasized the importance of transparency and accountability in the development and deployment of these AI systems. Universities must be transparent about how their algorithms work and what data they use to train them, and they must also be held accountable for any biases or errors that arise from these systems.
Ultimately, the use of AI in higher education admissions presents both opportunities and challenges. While machine learning algorithms can help streamline the admissions process and increase efficiency, they also require careful consideration and oversight to ensure that they do not perpetuate existing social inequalities. As universities continue to rely on AI in their decision-making processes, it is essential that they prioritize fairness, transparency, and accountability.
Cite this article: “AI-Generated Biases in Higher Education Admissions: Implications for Student Diversity and Inclusion”, The Science Archive, 2025.
Artificial Intelligence, Higher Education, Machine Learning, Student Admissions, Bias, Demographics, Research University, Test-Optional Policy, Algorithmic Fairness, Transparency, Accountability.