Algorithmic Fairness in Network Science

A NetSci 2025 Satellite Workshop

The Algorithmic Fairness in Network Science (FairNetSci) satellite explores the biases and disparities within networks and their implications for algorithmic outcomes. Network inequality refers to the structural biases, perception disparities, and persistent inequalities that stem from connection patterns among agents in a network. These biases shape individuals' social views, behavioural decisions, and social influence, highlighting the real-world impact of network structure.

Algorithms designed without addressing such biases risk producing unfair outcomes, particularly for minority groups. For example, link prediction algorithms may fail to accurately predict connections for smaller or less connected groups due to structural biases. Furthermore, recommendation systems can inadvertently amplify existing inequalities through their learning processes. However, with thoughtful design, algorithms can mitigate these biases and promote fair outcomes for all individuals and groups, irrespective of their size or type.
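The link-prediction disparity described above can be sketched with a toy simulation (all parameters here are illustrative assumptions, not drawn from the workshop text): in a homophilic network with a large majority group and a small minority group, a simple common-neighbors heuristic assigns systematically lower scores to candidate edges within the smaller group, simply because minority nodes have fewer potential common neighbors.

```python
import random

# Hypothetical setup: 80 majority nodes, 20 minority nodes, with homophily
# (within-group edges far more likely than between-group edges).
random.seed(0)
majority = list(range(80))
minority = list(range(80, 100))

adj = {v: set() for v in range(100)}  # adjacency sets

for u in range(100):
    for v in range(u + 1, 100):
        same_group = (u < 80) == (v < 80)
        p = 0.10 if same_group else 0.01  # illustrative edge probabilities
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)

def common_neighbors(u, v):
    """Common-neighbors link-prediction score for a candidate edge (u, v)."""
    return len(adj[u] & adj[v])

def mean_score(group):
    """Average score over non-adjacent within-group pairs."""
    pairs = [(u, v) for i, u in enumerate(group) for v in group[i + 1:]
             if v not in adj[u]]
    return sum(common_neighbors(u, v) for u, v in pairs) / len(pairs)

print(f"mean score, majority pairs: {mean_score(majority):.2f}")
print(f"mean score, minority pairs: {mean_score(minority):.2f}")
```

Running this, the average score for within-minority candidate edges is noticeably lower than for within-majority ones, so a ranking-based link recommender would surface fewer minority-minority links, one concrete mechanism behind the amplification of inequality mentioned above.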

This satellite seeks to unite experts from diverse fields, including computer science, data science, social science, mathematics, and network science, to explore and address these challenges collaboratively. The satellite will include invited and contributed lightning talks and poster presentations.

This satellite event also offers junior researchers, including graduate students, postdocs, and early-career faculty, an opportunity to showcase their work. A Best Student Paper Award will be presented to the most outstanding submission led by a graduate student.