Algorithmic Fairness in Network Science

A NetSci 2025 Satellite

3 June 2025, 08:30 AM - 12:30 PM

Keynote Speakers

Speaker 1

Prof. Dr. Nitesh Chawla

Prof. Dr. Nitesh V. Chawla is the Frank M. Freimann Professor of Computer Science and Engineering at the University of Notre Dame, where he also serves as the Founding Director of the Lucy Family Institute for Data and Society. He holds joint appointments in the Department of Applied and Computational Mathematics and Statistics and the Department of IT, Analytics, and Operations. A Fellow of AAAS, AAAI, ACM, and IEEE, Chawla is a leading expert in artificial intelligence, data science, and network science. His research bridges foundational methods with interdisciplinary applications aimed at advancing the common good. He has received numerous honors, including the IEEE CIS Outstanding Early Career Award, IBM Faculty Awards, and multiple teaching and best paper awards, reflecting his impact in both research and education.

Title: Toward Responsible AI: Advancing Fairness, Explainability, and Scalable Interpretability

Abstract: The pursuit of Responsible Artificial Intelligence (AI) increasingly hinges on two foundational pillars: ensuring fairness and enabling efficient, meaningful explanations. While black-box models offer superior predictive performance, their opaque decision-making can limit their adoption in domains where accountability, trust, and societal impact are critical. In this presentation, I will discuss our work addressing the limitations of current explainability and fairness approaches, proposing frameworks that emphasize both transparency and scalability without sacrificing model performance.

Speaker 2

Dr. Kristina Lerman

Kristina Lerman is a Senior Principal Scientist at the University of Southern California Information Sciences Institute and holds a joint appointment as a Research Professor in the USC Thomas Lord Department of Computer Science. Trained as a physicist, she now applies network analysis and machine learning to problems in computational social science, including crowdsourcing, social network analysis, and social media analysis. Her work on modeling and understanding cognitive biases in social networks has been covered by the Washington Post, Wall Street Journal, and The Atlantic. She is a Fellow of AAAI.

Title: Cumulative Disadvantage: How Feedback Loops Amplify Inequalities in Science

Abstract: Recommendation algorithms have become powerful mediators of information access and visibility, shaping how people discover content, form connections, and make decisions. However, their interaction with social biases can create feedback loops that systematically disadvantage certain groups. Through a case study of academic citations, we examine how these mechanisms emerge and compound over time. Analyzing large-scale bibliometric data, we show that women scientists receive fewer citations than their male counterparts, a disparity not explained by minority status alone. Additionally, in some disciplines, recognition becomes increasingly concentrated among authors from prestigious institutions, reinforcing elitism and compounding institutional advantages. These disparities stem from multiple mechanisms, including homophily (favoring similar authors), preferential attachment (favoring highly cited or active researchers), group size, and field growth dynamics. Academic search engines that rank papers by citation counts risk amplifying these disparities, creating a self-reinforcing cycle where advantaged groups gain increasing visibility while others face growing barriers to recognition. Our findings underscore the need to consider algorithmic fairness within broader social systems and advocate for ranking approaches that counter, rather than reinforce, systemic biases in scientific evaluation.