
The AI for Society Lab is a research group at the Institute of Human-Centred Computing, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Austria. We are committed to producing high-quality, responsible, and reproducible research results.

The AI for Society Lab at the Institute of Human-Centred Computing at Graz University of Technology, led by Univ. Prof. Dr. techn. Elisabeth Lex, conducts research on AI systems designed with a focus on societal benefit. The lab’s work centers on human-centered AI, inclusive technologies, and responsible algorithmic decision-making, addressing the interplay between ethical, social, and technical aspects of AI applications in real-world contexts.

Research at the lab investigates adaptive, explainable, and personalized AI methods that incorporate cognitive, behavioral, and social dimensions. The lab brings together perspectives from computer science and human-computer interaction to build intelligent systems with a particular focus on trustworthiness and alignment with human values.

Key research areas include user and behavior modeling, recommender systems informed by cognitive and psychological theories, trustworthy information access, and natural language processing. Particular attention is given to inclusive and participatory design practices that support accessibility and equity in AI development and deployment. Applications of this research span multiple domains, including education, healthcare, assistive technology, industrial environments, and online communication.

Several empirical studies conducted by the lab have contributed to advancing knowledge in the field. These include work on fairness and transparency in recommender and information retrieval systems [1], analysis of temporal changes in the linguistic complexity of song lyrics [2], investigation of opinion polarization in public discourse related to COVID-19 prevention measures [3], and the identification of fairness-related vulnerabilities in graph neural networks under adversarial conditions [4]. A comprehensive review conducted by the lab further demonstrated the value of integrating psychological theories into the design of recommender systems [5].

Ongoing research at the lab explores several directions aligned with its focus on responsible and human-centered AI. As part of the FWF Cluster of Excellence Bilateral AI, current work investigates neurosymbolic AI methods for personalization, the development of cognition-informed user modeling techniques, and strategies for diversifying news recommendations to broaden user perspectives. Additionally, within the FFG-funded HybridAir project, the lab is engaged in the design and implementation of domain-specific recommender systems to support decision-making in complex industrial environments, leveraging LLMs and graph-based representations. Another current line of research addresses algorithmic bias related to disability, with a particular focus on improving wheelchair-accessible recommendations through fairness-aware methods.

In addition to its research activities, the lab regularly co-organizes scientific events, such as the Recommender Systems for Social Good workshop at ACM RecSys’25 and the BIAS workshop at ACM SIGIR’25.

Selected References:

[1] Schedl, M., Anelli, V. W., & Lex, E. (2025). Technical and Regulatory Perspectives on Information Retrieval and Recommender Systems: Fairness, Transparency, and Privacy. Springer Nature Switzerland, Imprint: Springer.

[2] Parada-Cabaleiro, E., Mayerl, M., Brandl, S., Skowron, M., Schedl, M., Lex, E., & Zangerle, E. (2024). Song lyrics have become simpler and more repetitive over the last five decades. Scientific Reports, 14(1), 5531.

[3] Reiter-Haas, M., Klösch, B., Hadler, M., & Lex, E. (2023). Polarization of opinions on COVID-19 measures: Integrating Twitter and survey data. Social Science Computer Review, 41(5), 1811-1835.

[4] Hussain, H., Cao, M., Sikdar, S., Helic, D., Lex, E., Strohmaier, M., & Kern, R. (2022). Adversarial inter-group link injection degrades the fairness of graph neural networks. In 2022 IEEE International Conference on Data Mining (ICDM) (pp. 975-980). IEEE.

[5] Lex, E., Kowald, D., Seitlinger, P., Tran, T. N. T., Felfernig, A., & Schedl, M. (2021). Psychology-informed recommender systems. Foundations and Trends® in Information Retrieval, 15(2), 134-242.

Recent Tutorials

Current Projects

Previous Projects

Recent Publications

Selected GitHub Repositories:

Team

Head: Univ. Prof. Dr. Elisabeth Lex

PostDocs:

PhD students:

Master’s students:

Bachelor’s students:

Alumni:

Teaching

Contact

Social media: @socialcomplab
TU Graz: HCC