School of Automotive Engineering, Iran University of Science and Technology, Tehran, Iran
Abstract:
Ensuring that ethically sound decisions are made under complex, real-world conditions is a central challenge in deploying autonomous vehicles (AVs). This paper introduces a human-centric risk mitigation framework using Deep Q-Networks (DQNs) and a specially designed reward function to minimize the likelihood of fatal injuries, passenger harm, and vehicle damage. The approach uses a comprehensive state representation that captures the AV’s dynamics and its surroundings (including the identification of vulnerable road users), and it explicitly prioritizes human safety in the decision-making process. The proposed DQN policy is evaluated in the CARLA simulator across three ethically challenging scenarios: a malfunctioning traffic signal, a cyclist’s sudden swerve, and a child running into the street. In these scenarios, the DQN-based policy consistently minimizes severe outcomes and prioritizes the protection of vulnerable road users, outperforming a conventional collision-avoidance strategy in terms of safety. These findings demonstrate the feasibility of deep reinforcement learning for ethically aligned decision-making in AVs and point toward a pathway for developing safer and more socially responsible autonomous transportation systems.
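The abstract does not give the exact form of the "specially designed reward function." As an illustration only, the sketch below shows one way a reward could be weighted so that harm to vulnerable road users is penalized most heavily, followed by passenger injury and then vehicle damage; the class name, field names, and weight values are assumptions for the sketch, not the paper's actual formulation.

```python
# Hypothetical sketch (not the authors' exact reward): penalties ordered by
# ethical priority so a DQN policy learns to protect human life first.
from dataclasses import dataclass

@dataclass
class StepOutcome:
    hit_vulnerable_road_user: bool  # pedestrian or cyclist struck this step
    passenger_injury_level: float   # 0.0 (none) .. 1.0 (severe)
    vehicle_damage_level: float     # 0.0 (none) .. 1.0 (total loss)
    progress: float                 # forward progress this step, in meters

# Illustrative penalty weights, ordered by priority: human life first,
# then passenger harm, then property damage, with a small progress bonus.
W_VRU_COLLISION = 1000.0
W_PASSENGER_INJURY = 100.0
W_VEHICLE_DAMAGE = 10.0
W_PROGRESS = 0.1

def reward(outcome: StepOutcome) -> float:
    """Scalar reward for one simulation step."""
    r = W_PROGRESS * outcome.progress
    if outcome.hit_vulnerable_road_user:
        r -= W_VRU_COLLISION
    r -= W_PASSENGER_INJURY * outcome.passenger_injury_level
    r -= W_VEHICLE_DAMAGE * outcome.vehicle_damage_level
    return r
```

With weights like these, a policy trained by Q-learning would, for example, prefer an action that risks vehicle damage over one that risks striking a cyclist, which mirrors the prioritization described in the abstract.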
Type of Study: Research
Subject: Autonomous vehicles