Search results for: DDPG
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4

4 Trajectory Design and Power Allocation for Energy-Efficient UAV Communication Based on Deep Reinforcement Learning

Authors: Yuling Cui, Danhao Deng, Chaowei Wang, Weidong Wang

Abstract:

In recent years, unmanned aerial vehicles (UAVs) have been widely used in wireless communication and have attracted growing attention from researchers. A UAV can serve not only as a relay for auxiliary communication but also as an aerial base station for ground users (GUs). However, its limited onboard energy means that it cannot operate continuously and can cover only a limited service area. In this paper, we investigate 2D UAV trajectory design and power allocation in order to maximize the UAV's service time and downlink throughput. Based on deep reinforcement learning, we propose a deep deterministic policy gradient algorithm for trajectory design and power allocation (TDPA-DDPG) to jointly address energy efficiency and communication service quality. Simulation results show that TDPA-DDPG extends the UAV's service time as far as possible, improves communication service quality, and maximizes downlink throughput, with significant improvements over existing methods.

Keywords: UAV trajectory design, power allocation, energy efficiency, downlink throughput, deep reinforcement learning, DDPG

Procedia PDF Downloads 107
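
Editor's note: the paper's TDPA-DDPG implementation is not reproduced here, but the shape of such an agent is easy to sketch. Below is a minimal PyTorch sketch of a DDPG-style deterministic actor that emits a joint continuous action, a bounded 2D flight step plus per-user transmit powers. All dimensions, bounds, and the state layout are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Illustrative constants (assumptions, not from the paper): number of ground
# users, maximum flight step per decision epoch, and maximum transmit power.
NUM_USERS = 4
MAX_STEP = 5.0    # meters per step
MAX_POWER = 1.0   # watts per user
STATE_DIM = 2 + 1 + NUM_USERS  # 2D position, residual energy, per-user demand

class Actor(nn.Module):
    """Deterministic policy: maps the UAV state to a continuous action."""

    def __init__(self, state_dim=STATE_DIM, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Two heads: a 2D flight step (dx, dy) and per-user transmit powers.
        self.move = nn.Linear(hidden, 2)
        self.power = nn.Linear(hidden, NUM_USERS)

    def forward(self, state):
        h = self.body(state)
        step = torch.tanh(self.move(h)) * MAX_STEP        # bounded displacement
        power = torch.sigmoid(self.power(h)) * MAX_POWER  # nonnegative, capped
        return torch.cat([step, power], dim=-1)

actor = Actor()
action = actor(torch.zeros(1, STATE_DIM))  # one joint trajectory/power action
```

A DDPG critic would then score this joint action, which is why a deterministic continuous-action method fits here better than a discrete Q-learning scheme.
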
3 AI-based Radio Resource and Transmission Opportunity Allocation for 5G-V2X HetNets: NR and NR-U Networks

Authors: Farshad Zeinali, Sajedeh Norouzi, Nader Mokari, Eduard Jorswieck

Abstract:

The capacity demands of fifth-generation (5G) vehicle-to-everything (V2X) networks pose significant challenges. To address them, this paper utilizes New Radio (NR) and New Radio Unlicensed (NR-U) networks to develop a heterogeneous vehicular network (HetNet). We propose a new framework, named joint BS assignment and resource allocation (JBSRA), for mobile V2X users, and we also consider coexistence schemes based on a flexible duty cycle (DC) mechanism for the unlicensed bands. Our objective is to maximize the average throughput of vehicles while guaranteeing the throughput of WiFi users. In simulations based on deep reinforcement learning (DRL) algorithms such as the deep deterministic policy gradient (DDPG) and the deep Q-network (DQN), our proposed framework outperforms existing solutions that rely on a fixed DC or on schemes that do not consider the unlicensed bands.

Keywords: vehicle-to-everything (V2X), resource allocation, BS assignment, new radio (NR), new radio unlicensed (NR-U), coexistence NR-U and WiFi, deep deterministic policy gradient (DDPG), deep Q-network (DQN), joint BS assignment and resource allocation (JBSRA), duty cycle mechanism

Procedia PDF Downloads 68
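
Editor's note: in a JBSRA-style framework, the discrete decision (which NR or NR-U base station a vehicle attaches to) is a natural fit for DQN, while continuous power and resource shares suit DDPG. A minimal sketch of the DQN side follows; the state layout and network sizes are illustrative assumptions, not the paper's implementation.

```python
import random
import torch
import torch.nn as nn

# Illustrative sizes (assumptions): candidate NR/NR-U base stations per
# vehicle, and a flattened state of channel gains, queues, DC phase, etc.
NUM_BS = 3
STATE_DIM = 16

q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, NUM_BS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, NUM_BS))
target_net.load_state_dict(q_net.state_dict())  # frozen copy, synced periodically
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_bs(state, epsilon=0.1):
    # Epsilon-greedy over the discrete BS-assignment actions.
    if random.random() < epsilon:
        return random.randrange(NUM_BS)
    with torch.no_grad():
        return q_net(state).argmax().item()

def dqn_update(state, action, reward, next_state, gamma=0.99):
    # One-step TD target computed with the frozen target network.
    q = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * target_net(next_state).max()
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The reward here would encode the paper's objective: vehicle throughput, penalized when the coexisting WiFi users' guaranteed throughput is violated.
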
2 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving

Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian

Abstract:

In recent years, advances in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has a fundamental drawback: the space of possible road scenarios is so large that no dataset can cover every situation a car may encounter. To overcome this problem, this research investigates reinforcement learning (RL). Since autonomous driving can be modeled in a simulation, it lends itself naturally to the RL setting. The advantage of this approach is that different and complex road scenarios can be modeled in simulation without deploying in the real world, and the autonomous agent can learn to drive by finding the optimal policy. The learned model can then be deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we use TORCS (The Open Racing Car Simulator), which provides a strong foundation for testing the models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity and distance from the side of the track. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results: its reward is higher, and its acceleration, steering angle, and braking are more stable than those of the other algorithms, which means the agent learns to drive in a better and more efficient way. Additionally, we have produced a dataset from the training of the agent with the DDPG and PPO algorithms. It contains every step of the agent during one full training run, in the form (all input values, acceleration, steering angle, brake, loss, reward). This study can serve as a base for further, more complex road scenarios, and it can be extended to the field of computer vision, using camera images to learn the best policy.

Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning

Procedia PDF Downloads 119
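
Editor's note: the stability the authors observe in acceleration, steering, and braking under PPO is commonly attributed to its clipped surrogate objective, which bounds how far one update can move the policy. A short PyTorch sketch of that objective follows; this is the generic formulation from the PPO paper, not the project's own code.

```python
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    Probability ratios far from 1 are clipped, so a single gradient step
    cannot move the steering/throttle policy too far from the policy that
    collected the trajectories.
    """
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Negate because optimizers minimize; PPO maximizes the surrogate.
    return -torch.min(unclipped, clipped).mean()

# Example with a batch of 64 simulator transitions (random placeholders):
loss = ppo_clip_loss(torch.randn(64), torch.randn(64), torch.randn(64))
```
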
1 Deep Reinforcement Learning with Ornstein-Uhlenbeck Processes Based Recommender System

Authors: Khalil Bachiri, Ali Yahyaouy, Nicoleta Rogovschi

Abstract:

Improved user experience is a goal of contemporary recommender systems. Recommender systems are starting to incorporate reinforcement learning, since it naturally fits the goal of increasing a user's reward over every session. In this paper, we examine the most effective reinforcement learning agent strategies on the MovieLens (1M) dataset, balancing precision and the variety of recommendations. The absence of variability in the final predictions makes simplistic techniques, although able to optimize ranking quality criteria, worthless for consumers of the recommender system. Our suggested strategy utilizes the stochasticity of Ornstein-Uhlenbeck processes to encourage the agent to explore its environment. Our experiments demonstrate that raising the Ornstein-Uhlenbeck process drift coefficient enhances the diversity of suggestions without lowering the NDCG (Normalized Discounted Cumulative Gain) and HR (Hit Rate) criteria.

Keywords: recommender systems, reinforcement learning, deep learning, DDPG, Ornstein-Uhlenbeck process

Procedia PDF Downloads 106
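
Editor's note: the Ornstein-Uhlenbeck process referenced in this entry is the temporally correlated noise that DDPG conventionally adds to its deterministic actions for exploration. A minimal NumPy sketch follows; the action dimension and coefficients are illustrative defaults, not values from the paper.

```python
import numpy as np

class OrnsteinUhlenbeckNoise:
    """Ornstein-Uhlenbeck process for temporally correlated exploration noise.

    dx_t = theta * (mu - x_t) * dt + sigma * sqrt(dt) * N(0, 1)
    The drift coefficient theta pulls the noise back toward the mean mu;
    sigma controls the magnitude of the random perturbations.
    """

    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2):
        self.mu = mu * np.ones(size)
        self.theta = theta
        self.sigma = sigma
        self.dt = dt
        self.reset()

    def reset(self):
        # Restart the process at the mean at the start of each episode.
        self.x = self.mu.copy()

    def sample(self):
        # Euler-Maruyama discretization of the OU stochastic differential equation.
        dx = self.theta * (self.mu - self.x) * self.dt \
             + self.sigma * np.sqrt(self.dt) * np.random.randn(len(self.x))
        self.x = self.x + dx
        return self.x

# Exploration: perturb the actor's continuous action (e.g., an item-embedding
# vector, dimension assumed here) before ranking candidate recommendations.
noise = OrnsteinUhlenbeckNoise(size=8)
action = np.zeros(8) + noise.sample()
```

Because consecutive samples are correlated, the perturbed recommendations drift smoothly through the item space rather than jumping at random, which is what lets diversity rise without collapsing ranking quality.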