This paper explores the feasibility of a framework for vision-based obstacle avoidance in UAVs (Unmanned Aerial Vehicles), in which the decision-making policies are trained under the supervision of actual human flight data.
The neural networks are trained on aggregated flight data from human experts, learning the implicit policy for visual obstacle avoidance by extracting the necessary features from the images. The images and flight data are collected in a simulated environment provided by Gazebo, and the Robot Operating System (ROS) provides the communication nodes for the framework.
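The training step amounts to supervised imitation: the network regresses the expert's recorded control command from the corresponding camera image. The sketch below (PyTorch) illustrates this idea under illustrative assumptions; the dataset layout, image resolution, and two-dimensional command (e.g., yaw rate and forward speed) are placeholders, not the paper's exact interface.

# Minimal behavioral-cloning sketch, assuming each sample pairs a camera
# image with the human pilot's recorded velocity command. Shapes and the
# 2-D command vector are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder for aggregated expert data: 64 RGB images and matching commands.
images = torch.randn(64, 3, 120, 160)
commands = torch.randn(64, 2)          # e.g., [yaw_rate, forward_speed]
loader = DataLoader(TensorDataset(images, commands), batch_size=16, shuffle=True)

# Small CNN policy: image -> control command.
policy = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(5):
    for img, cmd in loader:
        loss = nn.functional.mse_loss(policy(img), cmd)  # imitate the expert command
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()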
The framework is tested and validated in various environments with four types of neural networks: fully connected neural networks, two- and three-dimensional CNNs (Convolutional Neural Networks), and Recurrent Neural Networks (RNNs). Among these, the sequential neural networks (i.e., 3D-CNNs and RNNs) provide better performance due to their ability to explicitly consider the dynamic nature of the obstacle avoidance problem; a sketch of such a sequential model follows.
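To illustrate why a sequential model can capture obstacle motion, the following sketch shows a 3D-CNN policy that convolves jointly over time and space across a short stack of frames. The sequence length, image size, and two-dimensional output command are assumptions for illustration only.

# Hypothetical 3-D CNN policy over a short clip of frames; not the paper's
# exact architecture.
import torch
import torch.nn as nn

class Conv3DPolicy(nn.Module):
    def __init__(self, n_outputs: int = 2):
        super().__init__()
        # Convolving over the time axis lets frame-to-frame changes
        # (e.g., an approaching obstacle) become explicit features.
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), stride=(1, 2, 2)), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(3, 5, 5), stride=(1, 2, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_outputs)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, time, height, width)
        return self.head(self.features(clip))

# Example: batch of 4 clips, 8 frames each, 120x160 RGB.
clip = torch.randn(4, 3, 8, 120, 160)
print(Conv3DPolicy()(clip).shape)  # torch.Size([4, 2])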
Read more at: https://link.springer.com/article/10.1007/s42405-020-00254-x