Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6

Autonomous Driving Related Abstracts

6 Optimization Based Obstacle Avoidance

Authors: R. Dariani, S. Schmidt, R. Kasper

Abstract:

Based on a non-linear single-track model that describes the vehicle dynamics, an optimal path planning strategy is developed. Real-time optimization is used to generate reference control values that guide the vehicle along a computed path which is optimal with respect to objectives such as energy consumption, travel time, safety, or comfort. A strict mathematical formulation of the autonomous driving task allows decisions to be made in undefined situations such as lane changes or obstacle avoidance. Based on the vehicle's position, the lane situation, and the obstacle's position, the optimization problem is reformulated in real time to avoid the obstacle and prevent a collision.
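
As a rough illustration of the idea, the sketch below sets up a small obstacle-avoidance optimization in Python. It replaces the non-linear single-track model with a simple lateral-offset parameterization and a soft obstacle penalty; the numbers, cost terms, and solver choice are assumptions for illustration, not the authors' formulation.

```python
# Minimal sketch of optimization-based obstacle avoidance (illustrative only).
# The path is parameterized by lateral offsets y(s) along the lane; the cost trades
# off smoothness and lane keeping against a soft obstacle penalty, mirroring the idea
# of reformulating the problem in real time when an obstacle appears.
import numpy as np
from scipy.optimize import minimize

N = 30                                  # discretization points along the lane
s = np.linspace(0.0, 60.0, N)           # longitudinal stations [m]
obs_s, obs_y, safe_r = 30.0, 0.0, 2.5   # assumed obstacle position and radius [m]

def cost(y):
    smooth = np.sum(np.diff(y, 2) ** 2)               # penalize curvature (comfort)
    keep_lane = 0.1 * np.sum(y ** 2)                  # stay close to the lane center
    d2 = (s - obs_s) ** 2 + (y - obs_y) ** 2
    avoid = 50.0 * np.sum(np.exp(-d2 / safe_r ** 2))  # soft obstacle penalty
    return smooth + keep_lane + avoid

y0 = np.zeros(N)                                      # start on the lane center line
res = minimize(cost, y0, method="L-BFGS-B")
path = np.column_stack([s, res.x])                    # optimized avoidance path
```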

Keywords: Optimal Control, Path Planning, Autonomous Driving, Obstacle Avoidance

Procedia PDF Downloads 227
5 Disturbance Observer for Lateral Trajectory Tracking Control for Autonomous and Cooperative Driving

Authors: Christian Rathgeber, Franz Winkler, Dirk Odenthal, Steffen Müller

Abstract:

In this contribution, a structure for high-level lateral vehicle tracking control based on a disturbance observer is presented. The structure compensates side-force disturbances in steady state while guaranteeing cooperative behavior at the same time; driver inputs are not compensated by the disturbance observer. Moreover, the structure is especially useful because it robustly stabilizes the vehicle. The parameters are therefore selected using the Parameter Space Approach. The implemented algorithms are tested in real-world scenarios.
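
The snippet below is a minimal, simplified illustration of the disturbance-observer principle for lateral control: a nominal model predicts the lateral response to the commanded steering, and the low-pass-filtered residual is treated as a side-force disturbance and compensated. The first-order form, gains, and variable names are assumptions and do not reproduce the structure or Parameter Space tuning from the paper.

```python
# Simplified first-order disturbance observer for lateral control (illustrative sketch).
class DisturbanceObserver:
    def __init__(self, gain_nominal, tau_filter, dt):
        self.k_n = gain_nominal   # assumed nominal steering-to-lateral-acceleration gain
        self.tau = tau_filter     # observer low-pass time constant [s]
        self.dt = dt              # sample time [s]
        self.d_hat = 0.0          # current disturbance estimate

    def update(self, steer_cmd, ay_measured):
        ay_nominal = self.k_n * steer_cmd            # response predicted by the nominal model
        residual = ay_measured - ay_nominal          # attributed to side-force disturbances
        alpha = self.dt / (self.tau + self.dt)       # first-order low-pass filter
        self.d_hat += alpha * (residual - self.d_hat)
        return self.d_hat

    def compensate(self, steer_cmd_tracking):
        # Subtract the estimated disturbance effect from the tracking command;
        # driver inputs would be routed around this compensation path.
        return steer_cmd_tracking - self.d_hat / self.k_n
```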

Keywords: Robust Control, Autonomous Driving, Cooperative Driving, Disturbance Observer, Trajectory Tracking

Procedia PDF Downloads 409
4 Multipurpose Agricultural Robot Platform: Conceptual Design of Control System Software for Autonomous Driving and Agricultural Operations Using Programmable Logic Controller

Authors: P. Abhishesh, B. S. Ryuh, Y. S. Oh, H. J. Moon, R. Akanksha

Abstract:

This paper discusses the conceptual design and development of control system software using a programmable logic controller (PLC) for autonomous driving and agricultural operations of a Multipurpose Agricultural Robot Platform (MARP). Based on initial conditions obtained from field analysis and the desired agricultural operations, the structural design of the MARP is developed using modelling and analysis tools. A PLC, being robust and easy to use, is employed to design the autonomous control system of the robot platform for the desired parameters. The robot is capable of autonomous driving and three automatic agricultural operations, namely hilling, mulching, and sowing of seeds, performed in that order. Input received from various sensors in the field is transmitted to the controller via a ZigBee network so that the control program can be adjusted to achieve the desired field output. The research aims to assist farmers by reducing the labor hours required for agricultural activities through automation. This study provides a cost-effective alternative to existing systems that rely on machinery attached behind tractors and on rigorous manual field operations.
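
As a rough sketch of the sequential operation logic described above, the following snippet mimics a PLC-style step chain for the three operations. It is written in Python rather than ladder logic or IEC 61131-3, and the function names and sensor checks are placeholders, not the MARP implementation.

```python
# Illustrative step chain for the three agricultural operations (not the MARP PLC program).
import time

OPERATIONS = ["hilling", "mulching", "sowing"]   # fixed order given in the abstract

def start_operation(name):
    # Placeholder: would set the PLC outputs that drive the implement for this step.
    print(f"starting {name}")

def operation_done(name):
    # Placeholder: completion flag, e.g. a sensor value received over the ZigBee link.
    return True

def run_field_pass():
    for op in OPERATIONS:                 # strictly sequential, like a PLC step chain
        start_operation(op)
        while not operation_done(op):
            time.sleep(0.1)               # poll the completion condition
        print(f"{op} complete")

run_field_pass()
```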

Keywords: Autonomous Driving, Agricultural Operations, PLC, MARP

Procedia PDF Downloads 234
3 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving

Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian

Abstract:

In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has a drawback: the state space of possible driving situations is so huge that no dataset can cover every road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that different and complex road scenarios can be modeled in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy, and the learned model can then be deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we use TORCS (The Open Racing Car Simulator), which provides a strong foundation for testing our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side of the road, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results: the reward is greater, and the acceleration, steering angle, and braking are more stable than with the other algorithms, which means that the agent learns to drive in a better and more efficient way. Additionally, we have compiled a dataset from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form (all input values, acceleration, steering angle, brake, loss, reward). This study can serve as a basis for more complex road scenarios. Furthermore, it can be extended to the field of computer vision, using images to find the best policy.
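
A comparison harness in the spirit of this study could look like the sketch below: train DDPG and PPO on a continuous-control environment and compare episodic returns. A gym-compatible TORCS wrapper is assumed but not shown, so a standard Gymnasium task stands in for the driving environment, and stable-baselines3 supplies the algorithm implementations; none of this reflects the authors' actual setup.

```python
# Sketch of a DDPG vs. PPO comparison on a continuous-control task (illustrative only).
import gymnasium as gym
import numpy as np
from stable_baselines3 import DDPG, PPO

def evaluate(model, env, episodes=5):
    """Average undiscounted return over a few deterministic evaluation episodes."""
    returns = []
    for _ in range(episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        returns.append(total)
    return float(np.mean(returns))

env = gym.make("Pendulum-v1")            # stand-in for a TORCS driving environment
results = {}
for name, algo in [("DDPG", DDPG), ("PPO", PPO)]:
    model = algo("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)  # short budget, purely for demonstration
    results[name] = evaluate(model, env)
print(results)
```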

Keywords: Reinforcement Learning, Autonomous Driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization)

Procedia PDF Downloads 8
2 Lane-Change Path Planning of Autonomous Driving Using Model-Based Optimization, Deep Reinforcement Learning and 5G Vehicle-to-Vehicle Communications

Authors: William Li

Abstract:

Lane-change path planning is a crucial and yet complex task in autonomous driving. The traditional path planning approach, based on a system of carefully crafted rules to cover various driving scenarios, becomes unwieldy as more and more rules are added to deal with exceptions and corner cases. This paper proposes to divide the entire path planning task into two stages. In the first stage, the ego vehicle travels longitudinally in the source lane to reach a safe state. In the second stage, the ego vehicle makes the lateral lane-change maneuver into the target lane. The paper derives the safe-state conditions from a lateral lane-change maneuver calculation to ensure a collision-free maneuver in the second stage. To determine the acceleration sequence that minimizes the time to reach a safe state in the first stage, the paper proposes three schemes, namely kinetic-model-based optimization, deep reinforcement learning, and 5G vehicle-to-vehicle (V2V) communications, and investigates them via simulation. The model-based optimization is sensitive to the model assumptions. Deep reinforcement learning is more flexible in handling scenarios beyond the assumptions of the optimization model. The 5G V2V scheme eliminates uncertainty in predicting the future behavior of surrounding vehicles by sharing driving intents and enabling cooperative driving.
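
A much-simplified version of the stage-one/stage-two logic is sketched below: a constant-velocity check of whether the front and rear gaps in the target lane stay above a margin for an assumed lane-change duration. The actual safe-state conditions in the paper are derived from its lateral maneuver calculation and may differ; the duration and margin here are illustrative.

```python
# Simplified safe-state check before starting the lateral stage (illustrative sketch).
def safe_to_change(gap_front_m, gap_rear_m, v_ego, v_front, v_rear,
                   t_lane_change=4.0, margin_m=5.0):
    """True if the front and rear gaps in the target lane stay above a margin for the
    whole (assumed) lane-change duration, under constant speeds."""
    front_gap_end = gap_front_m + (v_front - v_ego) * t_lane_change
    rear_gap_end = gap_rear_m + (v_ego - v_rear) * t_lane_change
    return (min(gap_front_m, front_gap_end) > margin_m and
            min(gap_rear_m, rear_gap_end) > margin_m)

# Stage one would adjust the ego speed longitudinally until this check passes.
print(safe_to_change(gap_front_m=30.0, gap_rear_m=20.0,
                     v_ego=25.0, v_front=24.0, v_rear=26.0))
```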

Keywords: Connected Vehicles, Path Planning, Autonomous Driving, Deep Reinforcement Learning, Lane Change, V2V Communications

Procedia PDF Downloads 1
1 Data Presentation of Lane-Changing Events Trajectories Using HighD Dataset

Authors: Basma Khelfa, Antoine Tordeux, Ibrahima Ba

Abstract:

We present a descriptive analysis of lane-changing event data on multi-lane roads. The data come from the Highway Drone Dataset (HighD), which contains microscopic vehicle trajectories recorded on highways. This paper describes and analyses the role of the different parameters and their significance. Using the HighD data, we aim to find the most frequent reasons that motivate drivers to change lanes. We use the programming language R to process these data. We analyze the involvement and relationships of the variables describing the ego vehicle and the four vehicles surrounding it, i.e., distance, speed difference, time gap, and acceleration. This is studied according to the class of the vehicle (car or truck) and according to the maneuver it undertakes (overtaking or falling back).
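
For readers who prefer Python, the sketch below shows how lane-change events could be extracted from a HighD tracks file with pandas (the authors used R). The file name and column names (id, frame, laneId) follow the publicly documented HighD format and should be checked against the actual files.

```python
# Sketch: extract lane-change events from one HighD recording (column names assumed
# from the public HighD tracks-file format; the authors' own pipeline is in R).
import pandas as pd

tracks = pd.read_csv("01_tracks.csv")            # one recording of the HighD dataset

events = []
for vid, traj in tracks.groupby("id"):
    traj = traj.sort_values("frame")
    changed = traj["laneId"].diff().fillna(0) != 0   # frames where the lane id changes
    for frame in traj.loc[changed, "frame"]:
        events.append({"vehicle_id": vid, "frame": frame})

lane_changes = pd.DataFrame(events)
print(f"{len(lane_changes)} lane-change events found")
```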

Keywords: Autonomous Driving, Prediction Model, Physical Traffic Model, Statistical Learning Process

Procedia PDF Downloads 1