Jsaizc-tfg


Project Card

Project Name: Drones

Author: Jesús Saiz Colomina [jesussaizcolomina@gmail.com]

Academic Year: 2017/2018

Degree: Degree in Aerospace Engineering

Tags: Deep Learning, Python, JdeRobot, Gazebo, Visual States, Slam-VisualMarkers

GitHub Repositories: Git Jesus Saiz - TFG

State: In development

Final Practice (Complex Path Pilot) - Visual States + Slam VisualMarkers (Autolocation) + Gazebo


Final Practice - Error Calculations

Errors of Slam-VisualMarkers (simple and complex routes)

Errors of Controller (Following beacons)

Errors of Pilot (Following Path)


Final Practice - Visual States + Slam VisualMarkers (Autolocation) + Gazebo


Final Practice - Visual States (Without Autolocation)


Visual States - Examples (Python and Gazebo)


Final Practice - Without Autolocation


Flying the real drone

Tools for flying the real drone from your laptop: UAV Viewer and ArDrone


Simulated precise landing

I have replicated Jorge Vela's code on my computer so that I can use it in my final algorithm: Jorge Vela TFG


First steps

The first thing is to have the Ubuntu operating system, or a virtual machine that lets us use it. Once we have this, we must install all the programs needed to run the different applications that will help us carry out our work; for this, see Installation. With all this in place we can begin the practices and start to familiarize ourselves with the environment.


Practice 4 - Cat and Mouse (GAZEBO)

In this last practice for getting to know the Gazebo environment, specifically the automation of drone flight, we propose a cat-and-mouse game. The aim is for the cat (black drone) to stay behind the mouse (red drone) and follow it wherever it goes, without losing it from its front camera, fully automatically. To achieve this I divided the problem into stages: first, tracking along a single axis (sideways), and once that worked correctly, the other two axes separately (forward-backward and up-down). Once each axis worked on its own, I put them together and adjusted the velocities so that the mouse cannot escape. This differs from the previous practice in that the mouse drone now moves automatically along all axes at the same time, and the side of the color filter changes depending on the movement made, which makes the task considerably more complex. A sketch of the per-axis decomposition is shown below.
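As a rough sketch of that per-axis decomposition, assuming a hypothetical cat drone object with a get_front_image() method and a send_velocity(vx, vy, vz) command (neither is part of the original code), one control iteration could look like the following. The color range, gains and target area are placeholder values, and the velocity sign conventions depend on the drone's body frame.

import cv2

# Gains tuned per axis, mirroring the step-by-step approach described above:
# first the sideways tracking, then up/down, then forward/backward.
KP_SIDE, KP_UP, KP_FWD = 0.003, 0.003, 0.0002
TARGET_AREA = 6000   # mask area (pixels) corresponding to the desired following distance

def chase_step(cat):
    # One iteration of the cat following the mouse with its front camera.
    frame = cat.get_front_image()                          # assumed front-camera getter
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))  # assumed red filter for the mouse

    m = cv2.moments(mask)
    if m["m00"] == 0:
        cat.send_velocity(0.0, 0.0, 0.0)                   # mouse lost: hover in place
        return

    cx = m["m10"] / m["m00"]                               # centroid of the filtered blob
    cy = m["m01"] / m["m00"]
    h, w = mask.shape

    vy = KP_SIDE * (w / 2 - cx)                            # sideways: centre the mouse horizontally
    vz = KP_UP * (h / 2 - cy)                              # up/down: centre it vertically
    vx = KP_FWD * (TARGET_AREA - cv2.countNonZero(mask))   # forward/back: keep apparent size constant
    cat.send_velocity(vx, vy, vz)

Putting the three proportional terms together in one command is what allows the cat to react to the mouse moving along all axes at once.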


Practice 3 - Follow turtlebot (GAZEBO)

In the third practice, also using the Gazebo simulator, we combine the previous two practices: first we apply a color filter to distinguish the target (in this case a turtlebot) and then, once it has been located, we track it by sending motor speed commands according to the turtlebot's movement. Since the lower camera is used, the commands needed to track it are simply: sideways and forwards to center the robot, and up and down to maintain the height distance to the target. A minimal sketch of one tracking iteration is given below.
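The sketch below assumes a hypothetical drone interface with get_bottom_image() and send_velocity(vx, vy, vz) (not the actual JdeRobot API) and uses the mask area as a stand-in for the height distance; the color range, gains and target area are illustrative values only.

import cv2

KP_XY = 0.002        # gain on the pixel error used to centre the turtlebot
KP_Z = 0.0001        # gain on the area error used to keep the height distance
TARGET_AREA = 8000   # mask area (pixels) corresponding to the desired height

def track_step(drone):
    # One control iteration: filter the turtlebot in the lower camera and centre it.
    frame = drone.get_bottom_image()                        # assumed lower-camera getter
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))   # assumed colour of the turtlebot

    m = cv2.moments(mask)
    if m["m00"] == 0:
        drone.send_velocity(0.0, 0.0, 0.0)                  # target lost: hover
        return

    cx = m["m10"] / m["m00"]
    cy = m["m01"] / m["m00"]
    h, w = mask.shape

    # Sideways and forwards commands centre the robot in the image; the vertical
    # command keeps its apparent size constant (signs depend on the camera frame).
    vy = KP_XY * (w / 2 - cx)
    vx = KP_XY * (h / 2 - cy)
    vz = KP_Z * (cv2.countNonZero(mask) - TARGET_AREA)
    drone.send_velocity(vx, vy, vz)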


Practice 2 - Navigating a drone by position (GAZEBO)

In the second practice, using the Gazebo simulator, we control a drone by its position. It is necessary to establish the initial coordinates and a reference frame to locate the drone, and from them to steer it through the 5 beacons of the practice. Once the goal of reaching the 5 beacons has been achieved, the drone must return to its initial position. A sketch of such a beacon route is shown below.
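The beacon route can be sketched as a simple proportional controller on the position error. The drone interface (get_position(), send_velocity(), takeoff(), land()), the beacon coordinates and the gains below are illustrative assumptions, not the values or API used in the practice.

import math
import time

# Illustrative beacon coordinates (x, y, z); the real practice defines its own.
BEACONS = [(5, 0, 2), (5, 5, 2), (0, 5, 2), (-5, 5, 2), (-5, 0, 2)]
HOME = (0, 0, 2)     # initial position in the chosen reference frame
KP = 0.5             # proportional gain on the position error
TOLERANCE = 0.3      # distance (m) at which a waypoint counts as reached

def goto(drone, target):
    # Proportional control on the position error towards one waypoint.
    while True:
        x, y, z = drone.get_position()                     # assumed pose getter
        ex, ey, ez = target[0] - x, target[1] - y, target[2] - z
        if math.sqrt(ex * ex + ey * ey + ez * ez) < TOLERANCE:
            return
        drone.send_velocity(KP * ex, KP * ey, KP * ez)     # assumed velocity command
        time.sleep(0.05)

def run_route(drone):
    drone.takeoff()
    for beacon in BEACONS:
        goto(drone, beacon)
    goto(drone, HOME)                                      # return to the initial position
    drone.land()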



Practice 1 - Color Filter

This practice gives us a first contact both with Python and with libraries such as OpenCV. The objective is, given a video of a red ball on a white background, to write a program that recognizes this color against the rest using a filter and follows the ball with a marker indicating where it is. With this we obtain a color filter that will be very useful for future needs; a minimal version is sketched below.
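A minimal version of such a filter with OpenCV could look like the following; the video file name and the HSV thresholds are example values only (red needs two hue ranges because it wraps around the hue axis), and the marker is a simple circle drawn at the blob centroid.

import cv2

# Hypothetical video file showing a red ball on a white background.
cap = cv2.VideoCapture("red_ball.mp4")

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Convert to HSV and combine the two red hue ranges into one mask.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    mask = cv2.bitwise_or(lower, upper)

    # Locate the centroid of the filtered region and mark it on the frame.
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (cx, cy), 10, (0, 255, 0), 2)

    cv2.imshow("color filter", frame)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()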