Source Code of 6G Disaster Relief Paper
- Submitted by: Amir Mohammadisarab
- Last updated: Thu, 06/22/2023 - 07:03
- DOI: 10.21227/xsb6-ah82
Abstract
This paper investigates how to increase the number of connections among users in hierarchical non-terrestrial network (HNTN)-assisted disaster relief service (DRS). We aim to maximize the number of satisfactory connections (NSCs) by optimizing the unmanned aerial vehicles' (UAVs') radio resources, computing resources, and trajectories at each time slot. In particular, the UAVs are exploited as aerial base stations (ABSs) to provide links for reduced capability (RedCap) user equipment (UE) based on power-domain non-orthogonal multiple access (PD-NOMA). Since disaster conditions render terrestrial networks non-operational, sending data in the shortest time is crucial for mission-critical (MC) UEs; hence, the end-to-end (E2E) delay serves as the quality of service (QoS) constraint. The proposed problem is solved with a multi-agent recurrent deterministic policy gradient (MARDPG) algorithm, in which the ABSs cooperate to maximize the NSCs and find their optimal policies by interacting with the environment. We further consider a sharing experience module (SEM) for the agents that encodes actions and observations using long short-term memory (LSTM), allowing each agent to utilize the other agents' history of actions and observations. To demonstrate the superiority of MARDPG, three algorithmic benchmarks and four different system models are implemented. Numerical results illustrate the impact of different parameters, such as the number of subcarriers, the number of users, and the maximum tolerable E2E delay, on the NSCs. In addition, different scenarios reveal that MARDPG outperforms the benchmarks, with roughly a 6 percent optimality gap and 91 percent fairness in achievable rate among users.
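To make the objective and the fairness figure concrete, the following is a minimal sketch (not taken from the released source code) of how the two reported quantities could be computed: a connection is counted as satisfactory when its E2E delay meets the maximum tolerable delay, and Jain's index is used as the fairness measure over achievable rates. The abstract does not name the fairness index, so treating it as Jain's index is an assumption, and all numbers below are hypothetical.

```python
def count_nsc(e2e_delays, max_delay):
    """Number of satisfactory connections: users whose E2E delay
    satisfies the QoS constraint (delay <= maximum tolerable delay)."""
    return sum(1 for d in e2e_delays if d <= max_delay)

def jain_fairness(rates):
    """Jain's fairness index over achievable rates, in (0, 1];
    equals 1.0 when all users achieve the same rate."""
    if not rates:
        return 0.0
    total = sum(rates)
    return total * total / (len(rates) * sum(r * r for r in rates))

# Hypothetical example: 4 users, 10 ms maximum tolerable E2E delay.
delays = [4.0, 8.0, 12.0, 9.5]   # per-user E2E delays (ms)
rates = [5.0, 4.5, 4.8, 5.2]     # per-user achievable rates (Mbps)
nsc = count_nsc(delays, 10.0)    # 3 of 4 connections are satisfactory
fairness = jain_fairness(rates)  # close to 1.0 for near-equal rates
```

In this framing, the MARDPG agents would be trained to choose radio resources, computing resources, and trajectories that maximize `count_nsc` across time slots, with fairness reported as a secondary statistic.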
The simulations were run on a computer with the following configuration: 48 gigabytes (GB) of random-access memory (RAM), an Intel Core i5-11400F (up to 4.5 GHz), a one-terabyte (TB) hard disk drive (HDD), two 500 GB solid-state drives, and an NVIDIA GeForce RTX 2080 GPU. The learning networks are implemented in Python 3.7 with the TensorFlow 2.6 and Keras 1.7 libraries.