Abstract
Unmanned Aerial Vehicles (UAVs) play a critical role in data collection for a wide range of Internet of Things (IoT) applications across remote, urban, and marine environments. In large-scale deployments, UAVs often face complex decision-making challenges, for which Deep Reinforcement Learning (DRL) has emerged as a promising solution. This paper presents a comprehensive review of research on UAV-assisted IoT utilizing DRL, covering key research questions relating to DRL algorithm variants, deployment objectives, architectural features, integrated technologies, UAV roles, optimization constraints, energy management strategies, and performance metrics. Findings indicate that value-based and actor-critic algorithms are the most commonly employed, targeting objectives such as path planning, transmit power control, scheduling, velocity and altitude control, and charging optimization. Other architectural considerations include clustering, security, obstacle avoidance, buffered sensors, and multi-UAV coordination. Beyond data collection, UAVs are also used for tasks such as device selection, data aggregation, and sensor charging, with energy management primarily achieved through charging and energy harvesting techniques. Performance is typically assessed using metrics like energy efficiency, throughput, latency, packet loss, and Age of Information (AoI). The paper concludes by outlining several promising research directions and open challenges critical to the successful deployment of UAVs as aerial communication platforms, especially in IoT data collection. By organizing existing work across these key themes, the review offers a valuable reference for researchers and technology professionals alike.
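To make the value-based family of algorithms mentioned above concrete, the sketch below applies tabular Q-learning (a simple, non-deep precursor of the DQN-style methods the review surveys) to a toy UAV path-planning task: a UAV starts at one corner of a small grid and must reach an IoT sensor at the opposite corner, with a per-step penalty standing in for energy cost. The grid size, reward values, and hyperparameters are illustrative assumptions, not details from the paper.

```python
import random

GRID = 4                                       # 4x4 grid world (assumed size)
SENSOR = (GRID - 1, GRID - 1)                  # IoT device the UAV must visit
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # E, W, S, N moves

def step(state, action):
    """Apply a move, clamp to the grid, and return (next_state, reward, done)."""
    r = min(max(state[0] + action[0], 0), GRID - 1)
    c = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (r, c)
    if nxt == SENSOR:
        return nxt, 10.0, True                 # data collected: terminal reward
    return nxt, -1.0, False                    # energy cost per hop

def train(episodes=500, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    random.seed(seed)
    q = {}                                     # (state, action index) -> value
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            nxt, rew, done = step(s, ACTIONS[a])
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            old = q.get((s, a), 0.0)
            # Standard Q-learning update; terminal states bootstrap from 0.
            q[(s, a)] = old + alpha * (rew + gamma * (0.0 if done else best_next) - old)
            s = nxt
    return q

def greedy_path(q):
    """Follow the learned greedy policy from the start to the sensor."""
    s, path = (0, 0), [(0, 0)]
    while s != SENSOR and len(path) < GRID * GRID:
        a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        s, _, _ = step(s, ACTIONS[a])
        path.append(s)
    return path

if __name__ == "__main__":
    path = greedy_path(train())
    print(path)
```

A DRL method such as DQN replaces the Q-table with a neural network so the same idea scales to continuous states (UAV position, battery level, channel quality); the surveyed works build on that substitution.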
Full text not available from this repository.
Official URL or Download Paper: https://www.sciencedirect.com/science/article/abs/...
Additional Metadata
| Item Type: | Article |
|---|---|
| Subject: | Computer Science; Electrical Engineering; Information Technology |
| Divisions: | Faculty of Computer Science and Information Technology |
| DOI Number: | https://doi.org/10.1016/j.vehcom.2025.100949 |
| Publisher: | Elsevier |
| Keywords: | Deep reinforcement learning; Unmanned aerial vehicles; Internet of things; Data collection; UAV-assisted IoT; Path planning; Energy management; DRL algorithms; Wireless communication; Aerial platforms |
| Sustainable Development Goals (SDGs): | SDG 9: Industry, Innovation and Infrastructure, SDG 11: Sustainable Cities and Communities, SDG 3: Good Health and Well-being |
| Depositing User: | Ms. Nur Faseha Mohd Kadim |
| Date Deposited: | 21 Apr 2026 09:02 |
| Last Modified: | 06 May 2026 23:44 |
| Altmetrics: | http://www.altmetric.com/details.php?domain=psasir.upm.edu.my&doi=10.1016/j.vehcom.2025.100949 |
| URI: | http://psasir.upm.edu.my/id/eprint/124655 |
