Institut für Bildung, Arbeit und Gesellschaft
Permanent URI for this collection: https://hohpublica.uni-hohenheim.de/handle/123456789/28
Browsing Institut für Bildung, Arbeit und Gesellschaft by Sustainable Development Goals "9"
Now showing 1 - 6 of 6
Publication
Autonomous weapons: considering the rights and interests of soldiers (2025)
Haiden, Michael; Richter, Florian
The development of autonomous weapons systems (AWSs), which would make decisions on the battlefield without direct input from humans, has the potential to dramatically change the nature of war. Due to the revolutionary potential of these technologies, it is essential to discuss their moral implications. While the academic literature often highlights their morally problematic nature, with some proposing to ban them outright, this paper highlights an important benefit of AWSs: protecting the lives, as well as the mental and physical health, of soldiers. If militaries can avoid sending humans into dangerous situations or relieve drone operators from tasks that lead to lifelong trauma, this appears morally desirable – especially in a world where many soldiers are still drafted against their will. Nonetheless, there are many arguments against AWSs. We show, however, that although AWSs are potentially dangerous, these criticisms apply equally to human soldiers and to weapons steered by them. Taken together, both claims make a strong case against banning AWSs where their use is possible. Instead, researchers should focus on mitigating their drawbacks and refining their benefits.

Publication
Does a smarter ChatGPT become more utilitarian? (2026)
Pfeffer, Jürgen (Technical University of Munich, TUM School of Social Sciences and Technology, Munich, Germany); Krügel, Sebastian (Faculty of Business, Economics and Social Sciences, University of Hohenheim, Stuttgart, Germany); Uhl, Matthias (Faculty of Business, Economics and Social Sciences, University of Hohenheim, Stuttgart, Germany)
Hundreds of millions of users interact with large language models (LLMs) regularly to get advice on all aspects of life. The increase in LLMs’ logical capabilities might be accompanied by unintended side effects with ethical implications. Focusing on recent model developments of ChatGPT, we show clear evidence of a systematic shift in ethical stances that accompanied a leap in the models’ logical capabilities. Specifically, as ChatGPT’s capacity grows, it tends to give decidedly more utilitarian answers to the two most famous dilemmas in ethics. Given the documented impact that LLMs have on users, we call for a research focus on the prevalence and dominance of ethical theories in LLMs as well as their potential shift over time. Moreover, our findings highlight the need for continuous monitoring and transparent public reporting of LLMs’ moral reasoning to ensure their informed and responsible use.

Publication
Educational ideals affect AI acceptance in learning environments (2026)
Richter, Florian (Catholic University of Eichstätt-Ingolstadt, Eichstätt, Germany); Uhl, Matthias (University of Hohenheim, Stuttgart, Germany)
AI is increasingly used in learning environments to monitor, test, and educate students and allow them to take more individualized learning paths. The success of AI in education will, however, require the acceptance of this technology by university management, faculty, and students. This acceptance will depend on the added value that stakeholders ascribe to this technology. In two empirical studies, we investigate the hitherto neglected question of what impact educational ideals have on the acceptance of AI in learning environments.
We find clear evidence that our study participants consider humanistic educational ideals less suitable for implementing AI in education than competence-based ideals. This implies that research on the influence of teaching and learning philosophies could be an enlightening component of a comprehensive research program on human-AI interaction in educational contexts.

Publication
Guidelines for using financial incentives in software-engineering experimentation (2024)
Krüger, Jacob; Çalıklı, Gül; Bershadskyy, Dmitri; Otto, Siegmar; Zabel, Sarah; Heyer, Robert
Context: Empirical studies with human participants (e.g., controlled experiments) are established methods in Software Engineering (SE) research to understand developers’ activities or the pros and cons of a technique, tool, or practice. Various guidelines and recommendations on designing and conducting different types of empirical studies in SE exist. However, the use of financial incentives (i.e., paying participants to compensate for their effort and improve the validity of a study) is rarely mentioned. Objective: In this article, we analyze and discuss the use of financial incentives for human-oriented SE experimentation to derive corresponding guidelines and recommendations for researchers. Specifically, we propose how to extend the current state of the art and provide a better understanding of when and how to incentivize. Method: We captured the state of the art in SE by performing a Systematic Literature Review (SLR) involving 105 publications from six conferences and five journals published in 2020 and 2021. Then, we conducted an interdisciplinary analysis based on guidelines from experimental economics and behavioral psychology, two disciplines that research and use financial incentives. Results: Our results show that financial incentives are sparsely used in SE experimentation, mostly as completion fees. In particular, performance-based and task-related financial incentives (i.e., payoff functions) are not used, even though we identified studies whose validity may benefit from tailored payoff functions. To tackle this issue, we contribute an overview of how experiments in SE may benefit from financial incentivisation, a guideline for deciding on their use, and 11 recommendations on how to design them. Conclusions: We hope that our contributions get incorporated into standards (e.g., the ACM SIGSOFT Empirical Standards), helping researchers understand whether the use of financial incentives is useful for their experiments and how to define a suitable incentivisation strategy.

Publication
Motivational framing strategies in health care information security training: randomized controlled trial (2025)
Keller, Thomas; Warwas, Julia Isabella; Klein, Julia; Henkenjohann, Richard; Trenz, Manuel; Thanh-Nam Trang, Simon
Background: Information security is a critical challenge in the digital age, especially for hospitals, which are prime targets for cyberattacks due to the monetary worth of sensitive medical data. Given the distinctive security risks faced by health care professionals, tailored Security Education, Training, and Awareness (SETA) programs are needed to increase both their ability and willingness to integrate security practices into their workflows. Objective: This study investigates the effectiveness of a video-based security training, which was customized for hospital settings and enriched with motivational framing strategies to build information security skills among health care professionals.
The training stands out from conventional interventions in this context, particularly by incorporating a dual-motive model to differentiate between self- and other-oriented goals as stimuli for skill acquisition. The appeal to the professional values of responsible health care work, whether present or absent, facilitates a nuanced examination of differential framing effects on training outcomes. Methods: A randomized controlled trial was conducted with 130 health care professionals from 3 German university hospitals. Participants within 2 intervention groups received either a self-oriented framing (focused on personal data protection) or an other-oriented framing (focused on patient data protection) at the beginning of a security training video. A control group watched the same video without any framing. Skill assessments using situational judgment tests before and after the training served to evaluate skill growth in all 3 groups. Results: Members of the other-oriented intervention group, who were motivated to protect patients, exhibited the highest increase in security skills (ΔM=+1.13, 95% CI 0.82-1.45), outperforming both the self-oriented intervention group (ΔM=+0.55, 95% CI 0.24-0.86; P=.04) and the control group (ΔM=+0.40, 95% CI 0.10-0.70; P=.004). Conversely, the self-oriented framing of the training content, which placed emphasis on personal privacy, did not yield significantly greater improvements in security skills than the control group (mean difference=+0.15, 95% CI –0.69 to 0.38; P>.99). Further exploratory analyses suggest that the other-oriented framing was particularly impactful among participants who often interact with patients personally, indicating that a higher frequency of direct patient contact may increase receptiveness to this framing strategy. Conclusions: This study underscores the importance of aligning SETA programs with the professional values of target groups, in addition to adapting these programs to specific contexts of professional action. In the investigated hospital setting, a motivational framing that resonates with health care professionals’ sense of responsibility for patient safety has proven effective in promoting skill growth. The findings offer a theoretically grounded, pragmatic pathway for implementing beneficial motivational framing strategies in SETA programs within the health care sector.

Publication
Navigating the social dilemma of autonomous systems: normative and applied arguments (2025)
Bodenschatz, Anja
Autonomous systems (ASs) are becoming ubiquitous in society. For one specific ethical challenge, normative discussions are scarce: the social dilemma of autonomous systems (SDAS). This dilemma has been assessed in empirical studies on autonomous vehicles (AVs): many people generally agree with a utilitarian programming of ASs, but do not want to buy a machine that might sacrifice them deterministically. One possible way to mitigate the SDAS would be for ASs to randomize between options of action. This would bridge the gap between a socially accepted program and potential AS users’ desire for some sense of self-protection. However, the normativity of randomization has not yet been evaluated for dilemmas between self-preservation and self-sacrifice for the “greater good” of saving several other lives. This paper closes this gap. It provides an overview of the most prominent normative and applied arguments for all three options of action in the dilemmas of interest: self-sacrifice, self-preservation, and randomization.
As a prerequisite for inclusion in societal discussions on AS programming, the paper establishes that a normative argument can be elicited for each potential course of action in abstract thought experiments. It then discusses factors that may shift the normative claim between self-sacrifice, self-preservation, and randomization in the case of AV programming. The factors identified in this comparison are generalized into guiding dimensions for moral considerations along which all three options of action should be evaluated when programming ASs for dilemmas involving their users.
