Cawadallah-tfg


Project Card

Project Name: New exercises in TeachingRobotics

Author: Carlos Awadallah Estévez [carlosawadallah@gmail.com]

Academic Year: 2017/2018

Degree: Degree in Audiovisual Systems and Multimedia Engineering.

GitHub Repositories: [1]

Tags: Linux, Python, OpenCV, JdeRobot, Gazebo

TFG: [2]

Jupyter Applied To The Project

Jupyter Notebook, the evolution of the IPython 3.x (or greater) technology, is an open-source web application that allows us to create and share documents that contain live code, equations, visualizations and narrative text. With this application installed on your computer, you can run your local code through the browser, which lays the foundations on which we will build a multiplatform JdeRobot framework. That is the reason why we decided to do some research on this technology.

The beginning of this road consists of making Jupyter Notebook versions of the practices I have developed.

Jupyter Notebook - Follow Face

In my first Jupyter Notebook I have included some of the basic features and widgets that the tool offers, creating different elements such as images, text and executable Python code that produces certain outputs. Here is a snapshot of the output produced by the Follow_face algorithm at a given moment:

Once the code of the solution and the node had been migrated to a Jupyter Notebook, I tested the execution using the specific hardware:

Testing Follow Face Through Jupyter.

Jupyter Notebook - Laser Loc

This practice combines typical Jupyter elements with fully interactive and highly functional widgets. The practice needed a form of representation, so I created a widget based on the Matplotlib library that allows us to visualize the content generated in each iteration of the execution. In addition, a teleoperator that connects to the robot was included, so that it can be controlled by the student. It is a static implementation of the original algorithm, due to limitations of the Jupyter kernel. The following video shows an example of the execution of the solution:

Testing Laser Loc Through Jupyter.
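As an illustration of the kind of Matplotlib widget described above, here is a minimal sketch (not the original notebook code; the laser scan format is an assumption) of how the plot can be redrawn on each iteration inside a Jupyter cell:

import math
import matplotlib.pyplot as plt
from IPython.display import clear_output

def show_laser(scan):
    # 'scan' is assumed to be a list of ranges in meters, one per degree
    angles = [math.radians(i) for i in range(len(scan))]
    xs = [d * math.cos(a) for d, a in zip(scan, angles)]
    ys = [d * math.sin(a) for d, a in zip(scan, angles)]
    clear_output(wait=True)       # replace the previous frame in the cell output
    plt.figure(figsize=(5, 5))
    plt.plot(xs, ys, '.')
    plt.title('Laser scan')
    plt.show()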

Jupyter Notebook - Position Control

The Notebook for this practice also includes some of the basic features and widgets that Jupyter has (images, code, lists, ...), so the student is guided at all times during the process.

Testing Position Control Through Jupyter.

Jupyter Notebook - Rescue People

In this notebook, in addition to including support for the practice and some Jupyter widgets to improve the explanation and to be able to execute Python code, I have included a small guide that helps the student tackle this somewhat more complex task successfully. Here is an example of the execution:

Testing Rescue People Through Jupyter.

Jupyter Notebook - Obstacle Avoidance

This time, I have used the Jupyter tool to make a notebook for the Obstacle Avoidance practice, detailing the steps for its completion, the API needed to control the robot's functionality, and other indispensable elements of the composition of the practice. The following video shows an example of the notebook execution:

Testing Obstacle Avoidance Through Jupyter.

Jupyter Notebook - Autopark

I have implemented a Jupyter Notebook for the Autopark practice in a basic version (without referee). You can see an execution of the result in the following video:

Testing Autopark Through Jupyter.

Jupyter Notebook - Global Navigation

For this practice I have taken the functionality of Jupyter to the next level. Since it was necessary to have interactive components that collect mouse events, I used the interactive widgets that the tool offers through its built-in magic commands. With this, I have been able to include other elements in the notebook through the Matplotlib library, such as graphics or maps, and collect the aforementioned events. An example of the execution of this notebook is shown in the following video:

Testing Global Navigation Through Jupyter.
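As a rough idea of how the mouse events are collected, here is a minimal sketch (assumed code, not the notebook itself) that registers a click handler on a Matplotlib figure; in Jupyter this requires an interactive backend, such as the one enabled by the %matplotlib notebook magic:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_title('Click to set the destination')
ax.set_xlim(0, 400)
ax.set_ylim(0, 400)   # stand-in for the real map extent

def on_click(event):
    # event.xdata / event.ydata hold the map coordinates chosen by the student
    if event.inaxes is ax:
        ax.plot(event.xdata, event.ydata, 'rx')
        fig.canvas.draw_idle()

fig.canvas.mpl_connect('button_press_event', on_click)
plt.show()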

Creating New JdeRobot-Academy Practices

Once this first contact with the JdeRobot environment is over, it is time to give shape to new ideas for JdeRobot-Academy practices, implementing for each of them an academic node with a graphical interface, all the necessary support for communication with the drivers involved, and a reference solution.


AUTOLOC LASER

This time, the intention is to create a practice that addresses a real and very common problem in robotics: self-localization. In this practice, the robot will have only a laser sensor and a map of the terrain on which it has to locate itself.

For this I have implemented an academic node, a Gazebo world (reused from those available in JdeRobot's set of worlds) and a graphical interface, in addition to the corresponding API needed to face the practice.

From now on, I will immerse myself in the implementation of the self-localization algorithm. I will follow the particle filter method to solve the practice.

Refining The Final Solution

The data from the odometry sensors provide increments of the radius and angle of the robot's movement. Therefore, to make the localization algorithm more realistic, we need to incorporate this data into the particle generation in the correct way, that is, relative to the position and especially to the orientation of each particle, instead of applying it in an absolute way (the value obtained in the robot's direction) to all of them. In the new video you can see how each particle incorporates the increments of radius and angle in its own orientation, so that if the robot goes to the right, each particle goes to ITS OWN RIGHT, instead of to the right of the robot (since the robot's actual pose is assumed to be unknown):

Autoloc Laser - Relative Odometry Data.
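A minimal sketch of this idea (with assumed particle and increment names, not the node's actual code) would look like this:

import math

def move_particles(particles, delta_r, delta_theta):
    # particles: list of (x, y, theta) pose hypotheses
    moved = []
    for x, y, theta in particles:
        new_theta = theta + delta_theta              # every particle turns by the same increment...
        new_x = x + delta_r * math.cos(new_theta)    # ...but advances along ITS OWN heading
        new_y = y + delta_r * math.sin(new_theta)
        moved.append((new_x, new_y, new_theta))
    return moved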

Road To Final Solution

The algorithm's performance is already as expected, but what if the robot needs to be localized while moving? To deal with that, I have incorporated into the algorithm the information given by the robot's encoders (odometry), so that I can update the whole generation of particles' coordinates (x, y, theta) with the robot's changing data. You can see an example of this in the next video:

Autoloc Laser - Update Particles' Position and Orientation with Encoders' data.

I have incorporated into the code the necessary logic to update the particles when there is a significant change in the robot's pose, so that the algorithm is now a little more robust when localizing in motion:

*NOTE: You will see the estimated route in light blue color in the video.

Autoloc Laser v.4 - Self-Location in Motion.

Third Developed Solution

Once everything works as it should, my efforts have focused on improving the localization accuracy, achieving less than 10 cm of error in all cases.

Autoloc Laser v.3 - REFINED

Second Developed Solution

The previous version had too high a computational load, so this new version focuses on code optimization. I have created different threads that handle the sensor data capture, the graphical interface and the processing of the localization algorithm. I have also reduced the number of laser beams used and precomputed everything that could be precomputed. This is the second version:

Autoloc Laser v.2 - OPTIMIZED
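The following is a minimal sketch of that threaded structure (the function and variable names are assumptions, the real sensor and filter calls are replaced by stand-ins, and the GUI refresh thread is omitted):

import random
import threading
import time

def read_laser():
    # Stand-in for the real laser driver call
    return [random.uniform(0.0, 10.0) for _ in range(180)]

def localization_step(scan):
    # Stand-in for one (optimized) iteration of the particle filter
    pass

state = {'running': True, 'scan': None}

def sensor_loop():
    # Capture the sensor data independently of the algorithm's pace
    while state['running']:
        state['scan'] = read_laser()
        time.sleep(0.05)

def algorithm_loop():
    # Process the latest available scan without blocking the capture thread
    while state['running']:
        if state['scan'] is not None:
            localization_step(state['scan'])
        time.sleep(0.1)

threading.Thread(target=sensor_loop, daemon=True).start()
threading.Thread(target=algorithm_loop, daemon=True).start()
time.sleep(1.0)
state['running'] = False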

Also, I have tested this version in a different world, more demanding in terms of location since it is a more symmetrical environment. This is the behaviour:

Autoloc Laser v.2 - DIFFERENT WORLD - Two different executions


First Developed Solution

Once all the decisions about how to create the new generations of particles have been established, it only remains to decide which of them to select and when to do it. In this case, I have decided to choose the most probable particle when all of them converge within a radius of 1 meter. This is the first complete solution:

Two Different Executions of Autoloc Laser v.1
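A minimal sketch of this convergence check (assumed particle format, not the actual node code) could be:

import math

def converged_estimate(particles, weights, radius=1.0):
    # particles: list of (x, y, theta); weights: matching list of probabilities
    best = particles[weights.index(max(weights))]
    for x, y, _theta in particles:
        if math.hypot(x - best[0], y - best[1]) > radius:
            return None      # particles have not converged yet: keep iterating
    return best              # all particles within 1 meter: take the most probable one as the pose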

Fourth Step

To start with the self-localization task, I added a button in the GUI that allows me to compute a new generation of particles. This new generation is created based on the value of the accumulated probability: if it is below a certain threshold, the particles are resampled at random, and if it exceeds this threshold, a small thermal noise is applied that allows the zone to be explored. To choose the particles that will have offspring, the roulette-wheel algorithm is followed.
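A minimal sketch of this resampling step (the threshold, noise level and map handling are assumptions, and the obstacle check is omitted) could look like this:

import random

def random_pose(map_size):
    # Uniform pose over a square map of side 'map_size' (obstacle check omitted)
    return (random.uniform(0, map_size), random.uniform(0, map_size),
            random.uniform(-3.1416, 3.1416))

def new_generation(particles, weights, accumulated_prob,
                   map_size=10.0, threshold=0.5, noise=0.1):
    # Low accumulated probability: throw the particles again at random over the map
    if accumulated_prob < threshold:
        return [random_pose(map_size) for _ in particles]
    # Otherwise: roulette-wheel selection plus a small thermal noise to explore the zone
    total = sum(weights)
    children = []
    for _ in particles:
        r = random.uniform(0, total)
        acc, parent = 0.0, particles[-1]
        for p, w in zip(particles, weights):
            acc += w
            if acc >= r:
                parent = p
                break
        x, y, theta = parent
        children.append((x + random.gauss(0, noise),
                         y + random.gauss(0, noise),
                         theta + random.gauss(0, noise)))
    return children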

Here goes an example:

INITIAL RANDOM GENERATION

AFTER SOME ITERATIONS [LEFT] | ACTION OF THE GAUSSIAN NOISE [RIGHT]

Since the algorithm starts from a random distribution, the results differ from one execution to another:

LOCALIZATION 1 VS. LOCALIZATION 2

Therefore, it is verified that it is necessary to save the localization state over time, to obtain more data that allows a more precise final localization.

Third Step

Before proceeding, I had to refine the space of observations. To do this, I generated a set of particles in the form of a grid, which allowed me to verify that the function assigns high probabilities to the particles close to the real position and low ones to the rest (lower image), in addition to adjusting it for better performance.

Second Step

Next, I will generate a set of particles with random position and orientation over the whole extension of the map (as long as they do not lie on an obstacle) and perform the ray tracing for each of them, which will allow me to compute its probability of being the real position of the robot. This is the result:

Yellow expresses probability 0 and red expresses probability 1, with an orange gradation for the intermediate probabilities. To assign the probabilities, a probability function has been implemented based on the error between the data of the real sensor and the fictitious or theoretical one.
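A minimal sketch of such a probability function (the exponential form and the sigma value are assumptions) could be:

import math

def particle_probability(real_scan, theoretical_scan, sigma=0.5):
    # Mean squared error between corresponding beams
    error = sum((r - t) ** 2 for r, t in zip(real_scan, theoretical_scan)) / len(real_scan)
    # Map the error to (0, 1]: small error -> probability close to 1 (red),
    # large error -> probability close to 0 (yellow)
    return math.exp(-error / (2 * sigma ** 2))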

First Step

The first step consists of doing the ray tracing, that is, generating "theoretical laser data" from a random point. In the following image, the real data (blue) and the theoretical data from a slightly offset point with the same orientation (red) are superimposed:
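A minimal sketch of this ray tracing over an occupancy grid (the map format, beam count, step and resolution are assumptions) could be:

import math

def ray_tracing(grid, pose, num_beams=180, max_range=10.0, step=0.05, resolution=0.1):
    # grid: 2D list where 1 marks an obstacle cell; pose: (x, y, theta) in meters/radians
    x, y, theta = pose
    distances = []
    for i in range(num_beams):
        angle = theta + math.radians(i) - math.pi / 2   # beams spread over 180 degrees
        d = 0.0
        while d < max_range:
            cx = int((x + d * math.cos(angle)) / resolution)
            cy = int((y + d * math.sin(angle)) / resolution)
            # Stop the beam at the map border or at the first occupied cell
            if cx < 0 or cy < 0 or cy >= len(grid) or cx >= len(grid[0]) or grid[cy][cx] == 1:
                break
            d += step
        distances.append(d)
    return distances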

Follow Face

The intention of this new practice is to instruct students in the processing of images obtained from a robot (in this case a USB camera, through the new JdeRobot tool that allows us to teleoperate the Sony EVI d100p camera, see [1]), in order to implement autonomous behavior based on this information. The task, then, is to segment people's faces in the image and, with that, to follow the face with the hardware.

[1] http://jderobot.org/index.php/Tools#PanTilt_Teleop

Solution v.2

For this second solution, I have added support to manage what happens when no face is detected in the image. The camera waits 5 seconds in its current position and then starts searching along the x axis, in one direction or the other, until it detects a face again. Detection still needs to be polished and improved to better filter out objects that are not faces.
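A minimal sketch of this search behavior (the pan interface, limits and step are assumptions; the real node uses the camera teleoperation calls) could be:

import time

def search_for_face(detect_face, set_pan, pan_limits=(-1.0, 1.0), pan_step=0.05):
    # detect_face() -> bool and set_pan(angle) stand in for the real detection and
    # pan-tilt teleoperation calls of the node
    time.sleep(5.0)                      # wait in the current position
    pan, direction = 0.0, 1
    while not detect_face():
        pan += direction * pan_step
        if pan > pan_limits[1] or pan < pan_limits[0]:
            direction = -direction       # reverse the sweep at the end of the range
            pan = max(min(pan, pan_limits[1]), pan_limits[0])
        set_pan(pan)
        time.sleep(0.1)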

GUI 2.0

I have made a second version of the GUI with the aim of simplifying it. I removed the checkbox that displayed the image served by the camera and merged it into the checkbox that gives access to the segmentation tool, so that everything needed is in the same window. This is the result:

First Developed Solution

For the next solution, I will try to refine face detection in order to achieve more fluid and robust movement.

First Steps of the Solution

Here is a summary of the important parts of the code:

1. We will use the OpenCV library for the segmentation:

import cv2
import numpy as np

2. We need to add the eye and face detector classifier systems:

# Load the pre-trained Haar cascade models (the XML files are assumed to be
# available in the working directory)
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

3. Face and eyes detection:

# 'gray' / 'gray_copy' are assumed to hold the grayscale version of the camera image
# (e.g. obtained with cv2.cvtColor) and 'input_image_copy' a BGR copy used for drawing
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                      minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)

for (x, y, w, h) in faces:
    # Draw a blue rectangle around each detected face
    cv2.rectangle(input_image_copy, (x, y), (x + w, y + h), (255, 0, 0), 2)
    # Restrict the eye search to the face region
    roi_gray = gray_copy[y:y + h, x:x + w]
    roi_color = input_image_copy[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        # Draw a green rectangle around each detected eye
        cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)


First GUI design

In the graphical interface I have included, in addition to the typical controls to execute, stop and reset the algorithm, a teleoperator for the camera and two tabs that allow us to visualize the image obtained from the robot and the result of the segmentation.





Upgrading existing practices

To polish the academic environment of JdeRobot, we will review all the existing practices, correcting errors, updating the instructions and the way of communicating with the drivers, and standardizing the configuration files. Once done, we will upload a test video.

Color Filter

Position Control

Cat&Mouse

Global Navigation

Autopark

Follow Road

Obstacle Avoidance

Rescue People

Labyrinth Escape

3D Reconstruction

For this practice, in addition to updating the communications to use JdeRobotComm and to work with version 5.6.x, I have changed the GUI to make it more intuitive and simple. The result is the following:

  • A solution has not been implemented yet, pending the update of the 3D viewer involved in the practice.

Stop at T joint

Testing new components

Some new components have been included in JdeRobot's 5.6.0 version, such as Evicam_driver + pantilt_teleop, which allow us to manage a Sony EVI camera connected via USB. This is the first contact with the tool:

Initial contact

  • JdeRobot Examples:

Running examples to check the operation of the components (driver+tool).

Component and timing in the video:
Cameraserver + Cameraview: 00:00 - 00:15
Cameraserver + Opencvdemo: 00:16 - 01:36
Simulated Kobuki + KobukiViewer: 01:37 - 03:04
Simulated ArDrone + UAVViewer: 03:04 - 04:39


  • Testing JdeRobot-Academy Exercises: