Instruction-finetuned Large Language Models inherit clear political leanings that have been shown to influence downstream task performance. In this work ([Chalkidis and Brandl, 2024](https://arxiv.org/abs/2403.13592)), we expand this line of research beyond the two-party system in the US and audit *Llama Chat* ([Touvron et al., 2023](https://arxiv.org/abs/2307.09288)) on political debates from the European Parliament in various settings to analyze the model's political knowledge and its ability to reason in context. We adapt, i.e., further fine-tune, *Llama Chat* on parliamentary debates of individual European parties to reevaluate its political leaning based on the *EU-AND-I* questionnaire ([Michel et al., 2019](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3446677)). *Llama Chat* shows extensive prior knowledge of party positions and is capable of reasoning in context. The adapted, party-specific models are substantially re-aligned towards the respective positions, which we see as a starting point for using chat-based LLMs as data-driven conversational engines to assist research in political science.