Institut für Bildung, Arbeit und Gesellschaft
Permanent URI for this collection: https://hohpublica.uni-hohenheim.de/handle/123456789/28
Browsing Institut für Bildung, Arbeit und Gesellschaft by Sustainable Development Goals "16"
Now showing 1 - 3 of 3
Publication
Autonomous weapons: considering the rights and interests of soldiers (2025)
Haiden, Michael; Richter, Florian

The development of autonomous weapons systems (AWSs), which would make decisions on the battlefield without direct human input, has the potential to dramatically change the nature of war. Given the revolutionary potential of these technologies, it is essential to discuss their moral implications. While the academic literature often highlights their morally problematic nature, with some proposing an outright ban, this paper highlights an important benefit of AWSs: protecting soldiers’ lives and their mental and physical health. If militaries can avoid sending humans into dangerous situations, or relieve drone operators of tasks that cause lifelong trauma, this appears morally desirable, especially in a world where many soldiers are still drafted against their will. Nonetheless, many arguments have been raised against AWSs. We show that although AWSs are potentially dangerous, these criticisms apply equally to human soldiers and the weapons they steer. Together, the two claims make a strong case against banning AWSs. Instead, researchers should focus on mitigating their drawbacks and refining their benefits.

Publication
Does a smarter ChatGPT become more utilitarian? (2026)
Pfeffer, Jürgen (Technical University of Munich, TUM School of Social Sciences and Technology, Munich, Germany); Krügel, Sebastian (Faculty of Business, Economics and Social Sciences, University of Hohenheim, Stuttgart, Germany); Uhl, Matthias (Faculty of Business, Economics and Social Sciences, University of Hohenheim, Stuttgart, Germany)

Hundreds of millions of users regularly interact with large language models (LLMs) to get advice on all aspects of life. The increase in LLMs’ logical capabilities might be accompanied by unintended side effects with ethical implications. Focusing on recent model developments of ChatGPT, we show clear evidence of a systematic shift in ethical stances that accompanied a leap in the models’ logical capabilities. Specifically, as ChatGPT’s capacity grows, it gives decidedly more utilitarian answers to the two most famous dilemmas in ethics. Given the documented impact that LLMs have on users, we call for a research focus on the prevalence and dominance of ethical theories in LLMs, as well as on their potential shift over time. Moreover, our findings highlight the need for continuous monitoring and transparent public reporting of LLMs’ moral reasoning to ensure their informed and responsible use.

Publication
Navigating the social dilemma of autonomous systems: normative and applied arguments (2025)
Bodenschatz, Anja

Autonomous systems (ASs) are becoming ubiquitous in society. For one specific ethical challenge, normative discussions are scarce: the social dilemma of autonomous systems (SDAS). This dilemma has been assessed in empirical studies on autonomous vehicles (AVs): many people generally agree with a utilitarian programming of ASs, but do not want to buy a machine that might deterministically sacrifice them. One possible way to mitigate the SDAS would be for ASs to randomize between options of action. This would bridge the gap between a socially accepted program and potential AS users’ desire for some sense of self-protection. However, the normativity of randomization has not yet been evaluated for dilemmas between self-preservation and self-sacrifice for the “greater good” of saving several other lives. This paper closes this gap. It provides an overview of the most prominent normative and applied arguments for all three options of action in the dilemmas of interest: self-sacrifice, self-preservation, and randomization. As a prerequisite for inclusion in societal discussions on AS programming, it ascertains that a normative argument can be elicited for each potential course of action in abstract thought experiments. The paper then discusses factors that may shift the normative claim between self-sacrifice, self-preservation, and randomization in the case of AV programming. The factors identified in this comparison are generalized into guiding dimensions for moral consideration along which all three options of action should be evaluated when programming ASs for dilemmas involving their users.
