
SOE: Physics of Socio-Economic Systems Division

SOE 10: Focus Session: Large Language Models, Social Dynamics, and Assessment of Complex Systems

SOE 10.5: Talk

Thursday, March 20, 2025, 16:15–16:30, H45

Computational modeling of LLM-powered personalized recommendations
•Alessandro Bellina (1,2,4), Giordano De Marzo (1,3,4), David Garcia (3), and Vittorio Loreto (1,2,4)
(1) Centro Ricerche Enrico Fermi, Piazza del Viminale 1, I-00184 Rome, Italy
(2) Sony Computer Science Laboratories Rome, Joint Initiative CREF-SONY, Piazza del Viminale 1, 00184 Rome, Italy
(3) University of Konstanz, Universitaetstrasse 10, 78457 Konstanz, Germany
(4) Dipartimento di Fisica, Università La Sapienza, P.le A. Moro 2, I-00185 Rome, Italy

Large language models (LLMs) are transforming recommendation systems by tailoring content to user preferences, but they also risk reinforcing filter bubbles and driving polarization. This study investigates the dual impact of LLM-based recommendations, analyzing their inherent biases and exploring how prompt engineering can mitigate these effects. By combining synthetic simulations with real-world Twitter data, we assess how LLMs influence user behavior and the extent to which recommendations amplify or reduce polarization. Preliminary results suggest that prompt engineering enables greater control over recommendations, fostering diversity and creativity. This research provides insights into the risks and opportunities of LLM-powered systems, offering a framework for designing more inclusive and balanced recommendation algorithms.
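To make the kind of feedback loop discussed above concrete, the following is a minimal, hypothetical sketch (not the authors' actual simulation): users repeatedly consume items suggested by a recommender, consumption reinforces topic preferences, and a `diversity_weight` parameter stands in for a prompt-engineering intervention that nudges the recommender toward under-consumed topics. All constants (`N_TOPICS`, `N_STEPS`, `REINFORCEMENT`) are assumptions chosen for illustration.

```python
import random
import math
from collections import Counter

# Toy filter-bubble model (illustrative only, not the study's code).
N_TOPICS = 10          # number of content topics (assumed)
N_STEPS = 200          # recommendation rounds per user (assumed)
REINFORCEMENT = 0.05   # preference gain after consuming a topic (assumed)


def recommend(preferences, diversity_weight=0.0):
    """Pick a topic for the user.

    diversity_weight = 0.0 mimics a purely personalized recommender;
    larger values blend in a uniform component, a stand-in for a
    prompt-engineered recommender encouraged to diversify.
    """
    uniform = 1.0 / len(preferences)
    scores = [(1 - diversity_weight) * p + diversity_weight * uniform
              for p in preferences]
    total = sum(scores)
    return random.choices(range(len(preferences)),
                          weights=[s / total for s in scores])[0]


def simulate_user(diversity_weight):
    """Run one user through the feedback loop; return consumption entropy."""
    prefs = [1.0 / N_TOPICS] * N_TOPICS
    consumed = Counter()
    for _ in range(N_STEPS):
        topic = recommend(prefs, diversity_weight)
        consumed[topic] += 1
        # Reinforce the consumed topic, then renormalize preferences.
        prefs[topic] += REINFORCEMENT
        norm = sum(prefs)
        prefs = [p / norm for p in prefs]
    # Shannon entropy of consumption: low entropy = narrow filter bubble.
    probs = [c / N_STEPS for c in consumed.values()]
    return -sum(p * math.log(p) for p in probs)


if __name__ == "__main__":
    random.seed(0)
    for w in (0.0, 0.3, 0.6):
        avg = sum(simulate_user(w) for _ in range(50)) / 50
        print(f"diversity_weight={w:.1f}  mean consumption entropy={avg:.2f}")
```

In this toy setting, raising `diversity_weight` increases the entropy of consumed topics, illustrating how an intervention at the recommendation stage (analogous to prompt engineering an LLM recommender) can counteract the self-reinforcing narrowing of user exposure.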

Keywords: LLMs; Recommendations; Personalization; Filter Bubble
