JulioVega PhD

Wiki page describing all the work carried out for Julio Vega's doctoral thesis.

People[edit]

  • Julio Vega (julio [dot] vega [at] urjc [dot] es)
  • José María Cañas (jmplaza [at] gsyc [dot] es)

Development[edit]

Dissertation writing phase (2018)[edit]

2018.09.21 Thesis defense[edit]

It is available here

2018.07.24 Final deposit of the thesis[edit]

It is available here

2018.07.01-20 Thoroughly reviewing thesis document for final deposit[edit]

2018.06.15-27. Reviewing and translating thesis document into English to deposit draft version[edit]

2018.06.08. New experiments with US, IR and camera sensors[edit]

PiBot bump and go using ultrasonic sensor[edit]

PiBot follow line using infrared sensor[edit]

PiBot follow line using vision[edit]

2018.06.05. First version of Chapter 4. Teaching Robotics framework[edit]

2018.05.29. First version of Chapter 3. Robots with vision[edit]

2018.05.22. First version of Chapter 2. State of art[edit]

2018.05.15. First version of Chapter 5. Conclusions[edit]

2018.05.08. First version of Chapter 1. Introduction[edit]

Teaching Robotics (2014-2018)[edit]

2018.05.06. Demonstrating new version features of the visual sonar system[edit]

As we can see on this video, the robot gets closer to the obstacles than last time. This is because of the new tilt angle, which lets the robot detect obstacles from 90 mm to 650 mm. Previously, the closest obstacle it could detect was at 250 mm. Furthermore, the system is faster, taking only 89 ms to get the frontier.

2018.05.05. New version of the visual sonar system[edit]

A new version of the visual sonar has been developed in order to fix some issues:

Visual sonar fixed issues:

  • Camera tilt: the camera tilt has been modified; it is now set to 59 degrees.
  • Obstacle detection distance: thanks to the new inclination, the system is able to detect obstacles from 100 mm to 650 mm.
  • 3D scene visualizer: it has been improved to be more intuitive, showing distances in multiples of 50 mm.
  • Faster frontier algorithm: the system used to take 12-18 s to get the frontier; now it needs only 89 ms.
  • Improved loops: all loops have been redesigned so that no iterations are wasted.

Here we can see the aspect of the new PiBot with the camera tilt inclined:

And it has been tested successfully. Here we see some examples:

100mm distance:

250mm distance:

400mm distance:

550mm distance:

2018.05.02. Avoiding obstacles using visual sonar and benchmark[edit]

On this video we are using the visual sensor, the PiCam, to avoid green obstacles. We assume the obstacles are on the floor, so we can estimate distances using a single camera by intersecting optical rays with the ground plane (Z = 0). In this case, the system intersects the rays with the obstacle frontier previously obtained as follows: original image -> RGB filter -> HSV filter -> green filter -> our own frontier algorithm -> back-project frontier -> frontier 3D memory -> calculate the minimum distance to it.
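
A minimal sketch of the last step of this pipeline, assuming the back-projected frontier is already available as a list of 3D ground points in millimetres (the motor helper names are hypothetical):

import math

SAFETY_DISTANCE_MM = 250   # hypothetical threshold below which we turn away

def min_distance_to_frontier(frontier_3d):
    # frontier_3d: list of (x, y, z) ground points in mm, robot at the origin
    return min(math.hypot(x, y) for (x, y, z) in frontier_3d)

def avoid_step(frontier_3d, robot):
    # robot.setV / robot.setW are hypothetical motor commands (mm/s, rad/s)
    if min_distance_to_frontier(frontier_3d) < SAFETY_DISTANCE_MM:
        robot.setV(0)
        robot.setW(0.5)    # turn in place until the obstacle leaves the frontier
    else:
        robot.setV(150)
        robot.setW(0)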

Once we have benchmarked the system, we get the following timing:

1st. Load Pin Hole camera model: 0.5 ms

2nd. Initialize PiCam: 0.5 s

3rd. Filter the image until the obstacles are segmented in black and white: 35 ms

4th. Get 2D frontier and backproject it onto the ground: 12 - 18 s

5th. Get min distance to 3D frontier and command motors properly: 0.5 s

So, as we can see and could guess, the expensive step is getting the 2D frontier and back-projecting it onto the 3D ground.

2018.05.01. Vision system robustness tests[edit]

After fixing several issues with the translation between the different coordinate systems in our vision system (the optical system, the world system and the virtual representation system in PyGame, which uses a scaled world), we have tested the robustness of the system with a series of images of objects placed at exactly measured distances. The results are plausible:

300mm distance:

400mm distance:

500mm distance:

600mm distance:

700mm distance:

900mm distance:

2018.04.30. New enriched 3D visualizer using PyGame library[edit]

PyGame is a free and open-source Python library for making multimedia applications like games, built on top of the excellent SDL library. Like SDL, PyGame is highly portable and runs on nearly every platform and operating system.

We use it because it offers simple functions to draw lines and circles (points), and it runs with its own graphics engine. It is very lightweight, which makes it ideal for the Raspberry Pi.
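
As an aside, a minimal sketch of this kind of visualizer, assuming a scale of 1 pixel per 5 mm and a hypothetical list of frontier points expressed in millimetres in the robot frame:

import pygame

SCALE = 0.2           # pixels per mm (assumed), i.e. 1 px = 5 mm
SIZE = (400, 400)     # window size in pixels
ORIGIN = (200, 380)   # robot position on screen (bottom centre)

def to_screen(x_mm, y_mm):
    # x_mm forward, y_mm to the left (robot frame) -> screen coordinates
    return (int(ORIGIN[0] - y_mm * SCALE), int(ORIGIN[1] - x_mm * SCALE))

pygame.init()
screen = pygame.display.set_mode(SIZE)
screen.fill((30, 30, 30))

# Range rings every 50 mm, as in the visualizer described above
for d in range(50, 700, 50):
    pygame.draw.circle(screen, (60, 60, 60), ORIGIN, int(d * SCALE), 1)

frontier_mm = [(300, -100), (320, 0), (310, 120)]   # hypothetical frontier points
for (x, y) in frontier_mm:
    pygame.draw.circle(screen, (0, 255, 0), to_screen(x, y), 3)

pygame.display.flip()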

So the new 3D scene visualizer results in a colorful, intuitive and friendly window:

2018.04.21. Right Ground frontier 3D points[edit]

After a re-calibration process, correctly setting some camera parameters, and a considerable effort to properly back-project an image pixel and then translate it to another coordinate system in order to draw the world on a simple 2D image, we finally get correct ground-frontier 3D points, which we can see in the next figure. We consider that 1 pixel in the figure corresponds to 1 mm in the real world.

2018.04.18. Wrong Ground frontier 3D points[edit]

After getting the filtered frontier points that we saw in the last post, they are back-projected onto the ground floor, because we are assuming that the objects (obstacles) are in the Z = 0 plane. Following the Pin-hole camera model we designed for the PiCam, which includes several matrix operations and projective geometry techniques, we finally get the picture below. It includes the resulting 3D ground points (Z = 0), but the camera lens model still needs some readjustment, probably related to the distortion parameters, which we assumed to be 0. To be continued...

2018.04.15. Frontier image filtered[edit]

The original image obtained from the PiCam is converted to the HSV scale, then an HSV color filter is applied to extract the green obstacles, which are shown in black and white. This mask is traversed with a bottom-up, column-wise algorithm in which each pixel is evaluated with respect to its eight neighbours, to decide whether or not it is a border point between the ground and an object (assumed to be on the ground).
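
A minimal sketch of this filter-plus-frontier step, assuming OpenCV and an HSV range chosen for our green obstacles (the exact thresholds are an assumption):

import cv2
import numpy as np

# Hypothetical HSV range for the green obstacles
GREEN_LOW = np.array([40, 80, 80])
GREEN_HIGH = np.array([80, 255, 255])

def frontier_pixels(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GREEN_LOW, GREEN_HIGH)   # obstacles in white, rest in black
    height, width = mask.shape
    frontier = []
    # Bottom-up, column-wise scan: the first white pixel found in each column
    # is taken as the ground/obstacle border point for that column
    for u in range(width):
        for v in range(height - 1, -1, -1):
            if mask[v, u] > 0:
                frontier.append((u, v))
                break
    return frontier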

2018.04.10. Show camera extrinsic parameters in a 3D Scene[edit]

In order to visualize the camera extrinsic parameters, although we do not have controlled motor encoders yet, a 3D plotter has been developed. With this tool we can load a simulation of camera movements through coordinates introduced manually in a configuration file, and we can see the 3D scene rendered with the different camera poses. The board represents the chessboard which we used to calibrate the PiCam.

2018.04.08. Back-projecting pixels to 3D[edit]

We have achieved the back-projection of 2D pixels into 3D using our Pin-hole camera class and some OpenCV functions to draw the back-projection line. We take the first black pixel from the bottom of the image to estimate depth. The result is as follows.

2018.04.06. Modeling the scene[edit]

We have introduced a 3D scene renderer under OpenGL with the aim of showing a full camera model together with the rays traced from the objects surrounding the robot.

Our first goal is to develop a visual sonar in order to design a complete obstacle-avoiding navigation algorithm using a single sensor: vision.

But, after several tries, we found OpenGL too complex, so we switched to the OpenCV drawing functions.

2018.03.31. Pin-Hole camera model class[edit]

We have been developing a new class for the JdeRobot-Kids middleware, called "PinholeCamera.py". It includes all the necessary methods to work with a Pin-hole camera model and its matrices. The class documentation is attached below:

NAME
    PinholeCamera

FILE
    /JdeRobot-Kids/visionLib/PinholeCamera.py

DESCRIPTION
    ###############################################################################
    # This file implements the class corresponding to a Pin Hole camera model,   #
    # i.e. an ideal camera. The parameters for handling the camera and its       #
    # calibration are stored in the matrices K, R, P (which includes T), which   #
    # are initially 0 (before the calibration is performed).                     #
    ###############################################################################
    # Calibration matrices / parameters:
    # A) Undistorted image: requires D and K
    # B) Rectified image: requires D, K and R
    ##############################################################################
    # Dimensions of the camera images; usually the camera resolution (in
    # pixels): height and width
    ##############################################################################
    # Description of the matrices:
    # Intrinsic parameter matrix (for distorted images)
    #     [fx  0 cx]
    # K = [ 0 fy cy]
    #     [ 0  0  1]
    # Camera rectification matrix (rotation): R
    # Projection matrix, or camera matrix (includes K and T)
    #     [fx'  0  cx' Tx]
    # P = [ 0  fy' cy' Ty]
    #     [ 0   0   1   0]
    ##############################################################################

CLASSES
    PinholeCamera
    
    class PinholeCamera
     |  A Pin Hole model camera is an ideal camera.
     |  
     |  Methods defined here:
     |  
     |  __init__(self)
     |  
     |  getCx(self)
     |  
     |  getCy(self)
     |  
     |  getD(self)
     |  
     |  getFx(self)
     |  
     |  getFy(self)
     |  
     |  getK(self)
     |  
     |  getP(self)
     |  
     |  getR(self)
     |  
     |  getResolucionCamara(self)
     |  
     |  getTx(self)
     |  
     |  getTy(self)
     |  
     |  getU0(self)
     |  
     |  getV0(self)
     |  
     |  proyectar3DAPixel(self, point)
     |      :param point:     3D point
     |      :type point:      (x, y, z)
     |      Converts the 3D point into rectified pixel coordinates (u, v),
     |      using the projection matrix :math:`P`.
     |      This function is the inverse of :meth:`proyectarPixelARayo3D`.
     |  
     |  proyectarPixelARayo3D(self, uv)
     |      :param uv:        rectified pixel coordinates
     |      :type uv:         (u, v)
     |      Returns the unit vector passing through the camera centre and
     |      the rectified pixel (u, v), using the projection matrix :math:`P`.
     |      This method is the inverse of :meth:`proyectar3DAPixel`.
     |  
     |  rectificarImagen(self, raw, rectified)
     |      :param raw:       input image
     |      :type raw:        :class:`CvMat` or :class:`IplImage`
     |      :param rectified: rectified output image
     |      :type rectified:  :class:`CvMat` or :class:`IplImage`
     |      Applies the rectification specified by the camera parameters
     |      :math:`K` and :math:`D` to the `raw` image and writes the resulting
     |      image into `rectified`.
     |  
     |  rectificarPunto(self, uv_raw)
     |      :param uv_raw:    pixel coordinates
     |      :type uv_raw:     (u, v)
     |      Applies the rectification specified by the camera parameters
     |      :math:`K` and :math:`D` to the point (u, v) and returns the
     |      rectified pixel coordinates.
     |  
     |  setPinHoleCamera(self, K, D, R, P, width, height)

FUNCTIONS
    construirMatriz(rows, cols, L)

2018.03.21. Calibrating PiCam[edit]

Intrinsic parameters[edit]

According to the manufacturer's manual, the technical intrinsic parameters of the PiCam (v2.1 board) are as follows:

PiCam (v2.1 board) technical intrinsic parameters:

  • Sensor type: Sony IMX219PQ color CMOS, 8 megapixels
  • Sensor size: 3.674 x 2.760 mm (1/4" format)
  • Pixel count: 3280 x 2464 (active pixels), 3296 x 2512 (total pixels)
  • Pixel size: 1.12 x 1.12 um
  • Lens: f = 3.04 mm, f/2.0
  • Angle of view: 62.2 x 48.8 degrees
  • Full-frame SLR lens equivalent: 29 mm

Extrinsic parameters[edit]

PiCam calibration app[edit]

Despite having the intrinsic parameters provided by the manufacturer, and since not all cameras have the same factory calibration, we have developed an app called PiCamCalibration.py to get these values. It uses several OpenCV functions to get the K matrix finally.

First, we got 10 test patterns for camera calibration using a chessboard. To find the pattern in the chessboard we use the function cv2.findChessboardCorners(), and to draw the pattern we use cv2.drawChessboardCorners(), as we can see in the next figure.

Now that we have our object points and image points, we are ready for calibration. For that we use the function cv2.calibrateCamera(). It returns the camera matrix, the distortion coefficients, the rotation and translation vectors, etc.
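
A minimal sketch of that calibration loop, assuming a chessboard with 9x6 inner corners and a set of image files whose names are placeholders:

import glob
import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners of the chessboard (assumed)

# 3D points of the pattern in its own plane (Z = 0), one per corner
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for name in glob.glob('calib_*.jpg'):          # hypothetical image names
    gray = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, K, D, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(K)   # intrinsic parameter matrix
print(D)   # distortion coefficients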

A screenshot of the output of our software is shown below.

2018.03.17. Reviewing vision concepts[edit]

We want the PiBot to support vision algorithms using the single PiCam camera on board. That is why we need to introduce some advanced concepts in geometry.

Pin-Hole camera model[edit]

Firstly, the camera model needs to be implemented. The camera model we are going to use is an ideal camera, called the Pin-hole camera. When we take an image with a pin-hole camera, we lose an important piece of information: the depth of the image, i.e. how far each point in the image is from the camera, because it is a 3D-to-2D conversion. So an important question is whether we can recover the depth information using cameras. The answer is to use more than one camera. Our eyes work in a similar way, using two cameras (two eyes); this is called stereo vision. OpenCV provides lots of useful functions in this field.

The image below shows a basic setup with two cameras taking the image of same scene (from OpenCV).

Epipolar Geometry[edit]

As we are using a single camera, we can’t find the 3D point corresponding to the point x in image because every point on the line OX projects to the same point on the image plane. With two cameras we get these two images and we can triangulate the correct 3D point. In this case, to find the matching point in other image, you don't need to search the whole image, but along the epiline. This is called Epipolar Constraint. Similarly all points will have its corresponding epilines in the other image. The plane XOO' is called Epipolar Plane.

Ground hypothesis[edit]

If we want to estimate distances from the robot to the surrounding objects, the key question is how can I get a depth map with a single camera? Or, in other words, how can I find the 3D point corresponding to the point x in the image above with a single camera?

To do that, and following the Ground Hypothesis, we consider that all the objects are on the floor, on the ground plane, whose location we know (the plane Z = 0). That way, we can find the 3D point corresponding to every point in the single image, as we can see in the image below.

So, if we get the pixels corresponding to the filtered border below the obstacles we can get the distance to them.

Homogeneous image coordinates[edit]

To compute the rays back-projecting those pixels, the position of the camera needs to be calculated. A camera is defined and positioned according to its matrices K * R * T, where:

- K (3x3) = intrinsic parameters

- R (3x3) = camera rotation

- T (3x1) = translation vector of the camera in X, Y, Z:

- X = forward movement

- Y = shift to the left

- Z = upward movement

(always looking from the point of view of what we are moving)

Alternatively, we can use a single matrix that includes R and T, which is therefore 4x4, as we see below. These matrices always follow the same form.

So, given a point in ground plane coordinates Pg = [X, Y, Z], its coordinates in camera frame (Pc) are given by:

Pc = R * Pg + t

The camera center is Cc = [0, 0, 0] in camera coordinates. In ground coordinates it is then:

Cg = -R' * t, where R' is the transpose of R.

(Assuming, as we saw before, for simplicity, that the ground plane is Z = 0.)

Let K be the matrix of intrinsic parameters (which we will describe shortly). Given a pixel q = [u, v], write it in homogeneous image coordinates Q = [u, v, 1]. Its location in camera coordinates (Qc) is:

Qc = Ki * Q

where Ki = inv(K) is the inverse of the intrinsic parameters matrix. The same point in world coordinates (Qg) is then:

Qg = R' * Qc - R' * t

All the points Pg = [X, Y, Z] that belong to the ray from the camera center through that pixel, expressed in ground coordinates, are then on the line

Pg = Cg + theta * (Qg - Cg)

(for theta going from 0 to positive infinity.)
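
A minimal NumPy sketch of these equations: it back-projects a pixel using K, R and t and intersects the resulting ray with the ground plane Z = 0 (the matrices here are whatever the calibration provides; nothing is specific to our camera):

import numpy as np

def pixel_to_ground(u, v, K, R, t):
    # Camera centre in ground coordinates: Cg = -R' * t
    Cg = -np.dot(R.T, t)
    # Pixel in homogeneous image coordinates, then in camera coordinates: Qc = inv(K) * Q
    Q = np.array([u, v, 1.0])
    Qc = np.dot(np.linalg.inv(K), Q)
    # The same point in ground coordinates: Qg = R' * Qc - R' * t
    Qg = np.dot(R.T, Qc) - np.dot(R.T, t)
    # Ray Pg = Cg + theta * (Qg - Cg); choose theta so that Z = 0
    d = Qg - Cg
    if abs(d[2]) < 1e-9:
        return None              # ray parallel to the ground, no intersection
    theta = -Cg[2] / d[2]
    if theta < 0:
        return None              # intersection behind the camera
    return Cg + theta * d        # 3D point on the ground plane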

Using matrices to save camera position[edit]

Introduction[edit]

If the camera is displaced with respect to several axes, we must multiply the matrices one after another, over and over again. For example, for a complete transformation of the camera:

1st. If we have encoders with information of X, Y and Theta, the robot is moved with respect to the absolute (0, 0) of the world, and rotated with respect to the Z axis, so the RT matrix of the robot would be:

2nd. If we have a Pan-Tilt unit for the camera, it will be moved with respect to the base of the robot (which is on the ground level).

3rd. The Tilt axis is translated in Z with respect to the base of the Pan-Tilt, and rotated with respect to the Z axis according to Pan angle.

4th. The Tilt axis is also rotated with respect to the Y axis according to the Tilt angle.

5th. The optical center of the camera is translated in X and in Z with respect to the Tilt axis.

Using matrices[edit]

To obtain the absolute position of the camera in the world we multiply the previous matrices as following:

temp1 = 1st * 2nd

temp2 = temp1 * 3rd

temp3 = temp2 * 4th

temp4 = temp3 * 5th

The elements temp4[0,3], temp4[1,3] and temp4[2,3] now contain the absolute X, Y, Z position of the camera.

Expressing camera position as Position + FOA (Focus Of Attention)[edit]

If we express the camera as "POSITION + FOA", we will have to apply the previous resultant matrix ("temp4") to the relative FOA of the camera, which is stored in a 4x1 vector. For example, if "relativeFOA" is:

Then, the absolute FOA will be expressed (and saved) in a 4x1 vector ("absoluteFOA") according to:

absoluteFOA = temp4 * relativeFOA
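
A minimal sketch of this chain of transformations with NumPy 4x4 homogeneous matrices; the individual matrices below are placeholders, since in the real system they come from the encoders and the pan-tilt geometry:

import numpy as np

def rt(R=np.eye(3), t=(0.0, 0.0, 0.0)):
    # Build a 4x4 homogeneous matrix from a 3x3 rotation and a translation
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Placeholder matrices for the five steps described above
robot_in_world   = rt(rot_z(0.3), (1000.0, 500.0, 0.0))   # encoders: X, Y, theta
pantilt_on_robot = rt(t=(0.0, 0.0, 200.0))                # pan-tilt base over the robot
tilt_axis        = rt(rot_z(0.1), (0.0, 0.0, 50.0))       # pan angle + tilt-axis height
tilt_rotation    = rt()                                   # tilt angle (identity here)
optical_center   = rt(t=(20.0, 0.0, 30.0))                # optical centre offset

temp4 = robot_in_world.dot(pantilt_on_robot).dot(tilt_axis).dot(tilt_rotation).dot(optical_center)
camera_xyz = temp4[:3, 3]                                 # absolute X, Y, Z of the camera

relativeFOA = np.array([100.0, 0.0, 0.0, 1.0])            # hypothetical relative FOA (4x1)
absoluteFOA = temp4.dot(relativeFOA)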

2018.03.08. Starting from scratch with the new FeedBack 350º Servos[edit]

I connect the servos correctly; the servo requires an input control signal sent from the microcontroller via the white wire connection. What I cannot do yet is read the feedback information properly.

Following the manufacturer's manual, the servo sends a feedback output signal to the microcontroller via the yellow wire connection. This signal, labeled tCycle in the diagrams and equations below, has a frequency of 910 Hz (a period of approximately 1.1 ms), +/- 5%.

Within each tCycle iteration, tHigh is the duration in microseconds of a 3.3 V high pulse. The duration of tHigh varies with the output of a Hall-effect sensor inside of the servo. The duty cycle of this signal, tHigh / tCycle, ranges from 2.9% at the origin to 97.1% approaching one clockwise revolution.

Duty cycle corresponds to the rotational position of the servo, in the units per full circle desired for the application.

For example, a rolling robot application may use 64 units, or "ticks", per full circle, mimicking the behavior of a 32-spoke wheel with an optical 1-bit binary encoder. A fixed-position application may use 360 units, to move and hold the servo at a certain angle.

The following formula can be used to correlate the feedback signal’s duty cycle to angular position in your chosen units per full circle.

Duty Cycle = 100% x (tHigh / tCycle). Duty Cycle Min = 2.9%. Duty Cycle Max = 97.1%.

Angular position in units full circle = ((Duty Cycle - Duty Cycle Min) x units full circle) / (Duty Cycle Max - Duty Cycle Min + 1)
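
A minimal sketch of that conversion, assuming the tHigh and tCycle pulse widths have already been measured (here in microseconds):

DC_MIN = 2.9    # duty cycle at the origin (%)
DC_MAX = 97.1   # duty cycle approaching one full clockwise turn (%)

def angular_position(t_high_us, t_cycle_us, units_full_circle=360):
    duty_cycle = 100.0 * t_high_us / t_cycle_us
    # Formula from the manufacturer's manual, in the chosen units per full circle
    return ((duty_cycle - DC_MIN) * units_full_circle) / (DC_MAX - DC_MIN + 1)

# Example: a 380 us high pulse within a 1100 us cycle
print(angular_position(380, 1100))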

2018.03.03. Introducing the new PiBot v3.0[edit]

I've started from scratch with the PiBot prototype in order to add the new Parallax Feedback 360° servos, which include encoders.

PiBot v3.0 new features:

  • New servos: added 3 x Parallax Feedback 360° High Speed Servos. They are light-duty, high-speed continuous-rotation servos with encoders.
  • Removed old features: removed the old and unused Arduino add-ons, such as the battery pack, the Arduino and proto boards and, of course, the old servos.
  • Lightweight (0.5 kg): as a result of the above, we get a new compact design and a more powerful and lighter PiBot prototype.

The following shows the design process described with images.

First steps: removing all components

Adding new Servos and wheels on board

Last steps: mounting the battery, PiCam and Raspberry Pi 3, hiding the tedious cables and weighing it

Result: the new PiBot v3.0

2018.03.01. JdeRobot-Kids as a multiplatform Robotics middleware[edit]

On this video we can see an example of how we can use JdeRobot-Kids with different robotic platforms, such as the mBot (real or Gazebo-simulated) and the piBot (the Raspberry Pi robot).

We just need to change the robot platform field in the configuration file, and the application (written in Python) on top of the JdeRobot-Kids middleware will work in the same way for all the platforms described previously.

2018.02.18. PiBot following a blue ball[edit]

We have developed a new algorithm called "FollowBall". It extracts a filtered colored object, gets its position and then applies a PID control algorithm in order to properly drive the two motors. The goal is to guide the direction of the robot so as to keep the pursued object in the center of the current frame at all times.
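
A minimal sketch of the control part, assuming the filtered object centre has already been extracted from the frame; the gains and the way the output is sent to the motors are assumptions, not the values used in "FollowBall":

KP, KI, KD = 0.4, 0.01, 0.1    # hypothetical PID gains

class FollowBallController:
    def __init__(self):
        self.prev_error = 0.0
        self.integral = 0.0

    def step(self, ball_x, frame_width):
        # Error: horizontal offset of the ball from the image centre, normalized to [-1, 1]
        error = (ball_x - frame_width / 2.0) / (frame_width / 2.0)
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        # Angular-speed command; the caller would send it to the motors, e.g. robot.setW(w)
        return -(KP * error + KI * self.integral + KD * derivative)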

2018.02.11. PiBot v2.0 (full equipment) working[edit]

On this video we can see the PiBot v2.0 (full equipment) working. It is equipped with an ultrasonic sensor, two continuous-rotation servos to drive the left and right wheels, and a PiCamera turret mounted on a simple 180º servo.

This way we have already reached the limit of voltage pins, since we have used the four voltage pins (2 x 5 V + 2 x 3.3 V) that the Raspberry Pi 3 has available, although many more GPIO pins remain free.

2018.02.10. New design of the PiBot v2.0 (full equipment) for using JdeRobot-Kids API[edit]

We have been designing and building the new PiBot model to take advantage of the JdeRobot-Kids infrastructure and have a fully functional robot. We ran into the limitation of the four voltage pins available on the Raspberry Pi 3.

It is equipped with an ultrasonic sensor, two continuous-rotation servos to drive the left and right wheels, and a PiCamera turret mounted on a simple 180º servo.

2018.02.09. Testing how to define and use abstract base classes for JdeRobot-Kids API[edit]

Our purpose is to develop the following schema, using JdeRobot-Kids at the top level as an abstract class called "JdeRobotKids.py", which is the common API for young students and is implemented by a set of subclasses: "mBotReal", "mBotGazebo" and "piBot". The user works with this library through his/her own code in a Python file (e.g. "myAlgorithm.py").

That's why we have started testing simple classes using the "abc" library (Abstract Base Classes). It works by marking methods of the base class as abstract, and then registering concrete classes as implementations of the abstract base.

This example code can be downloaded from here
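
A minimal sketch of the idea with the "abc" library (Python 3 style), using a reduced hypothetical API with a single move() method rather than the full JdeRobotKids interface:

import abc

class JdeRobotKids(abc.ABC):
    # Common abstract API; every platform subclass must implement the abstract methods
    @abc.abstractmethod
    def move(self, v, w):
        """Command linear (v) and angular (w) speed on the platform."""

class PiBot(JdeRobotKids):
    def move(self, v, w):
        print("PiBot moving with v=%s, w=%s" % (v, w))

robot = PiBot()     # a subclass that misses move() could not be instantiated (TypeError)
robot.move(0.2, 0.0)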

(Bonus track) If you need to import files from different folders, you can't (by default), because Python only searches the current directory, i.e. the directory that the entry-point script is run from. However, you can add to the Python path at runtime:

import sys

# Prepend the platform folders to the module search path so that
# their modules can be imported from the entry-point script
sys.path.insert(0, './mBotReal')
sys.path.insert(0, './mBotGazebo')
sys.path.insert(0, './piBot')

2018.02.08. Introducing the new robotic platform: piBot[edit]

On this video we introduce the new robotic platform, which we've called piBot because it uses the Raspberry Pi as a main-board instead of Arduino.

As we can see, we control two continuous rotation servos using a wireless USB keyboard through the Raspberry Pi, using the GPIO ports and coded in Python.

2018.02.04. New device supported on Raspberry Pi 3: RC Servo[edit]

This new device, an RC 180-degree servo, has been programmed in Python and, as we can see on this video, it is able to move from the center position (90 degrees) to 0 and 180 degrees (approximately), using the Raspberry Pi 3 GPIO ports.
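
A minimal sketch of how such a servo can be driven with software PWM through RPi.GPIO; the pin number and the duty-cycle values for each angle are assumptions and depend on the servo:

import time
import RPi.GPIO as GPIO

SERVO_PIN = 11      # hypothetical board pin for the servo signal wire

GPIO.setmode(GPIO.BOARD)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)      # standard 50 Hz servo signal
pwm.start(7.5)                     # ~1.5 ms pulse: centre position (90 degrees)

try:
    for duty in (2.5, 7.5, 12.5):  # roughly 0, 90 and 180 degrees
        pwm.ChangeDutyCycle(duty)
        time.sleep(1)
finally:
    pwm.stop()
    GPIO.cleanup()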

2018.02.01. New sensor supported on Raspberry Pi 3: Ultrasonic[edit]

We have adapted the ultrasonic sensor through some electronics so that it is supported by the GPIO ports of the Raspberry Pi 3. Using an application written in Python, we can read the values sensed by this sensor.

The output signal of this sensor is rated at 5 V, but the input GPIO pin on the Raspberry Pi is rated at 3.3 V, so sending a 5 V signal into a 3.3 V input port could damage the Raspberry Pi board or, at least, this pin. That is why we have implemented a voltage divider circuit with two resistors in order to lower the sensor output voltage for our Raspberry Pi GPIO port.
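
A minimal sketch of reading such a sensor through the GPIO (the trigger and echo pins are hypothetical, and the echo line is assumed to come through the voltage divider described above):

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24   # hypothetical BCM pins; ECHO arrives through the voltage divider

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    # A 10 us pulse on TRIG starts a measurement
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    # Measure how long the echo line stays high
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    # Sound travels at ~343 m/s; divide by 2 for the round trip
    return (end - start) * 34300 / 2

print("%.1f cm" % read_distance_cm())
GPIO.cleanup()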

We can see the results on the next video:

2018.01.27. New PiCam ICE-driver developed[edit]

We have developed a new driver, called PiCamServer, which supports the Raspberry Pi 3 camera (a.k.a. PiCam). This driver is a server which serves images through ICE communications.

On the next video, we can see how the new driver can serve images from a webcam or a PiCam, just changing the device in the server configuration file, so that we can get these images using a simple images-client tool under ICE protocol.

2018.01.23. Testing plug-in mBot under gazeboserver using a basic COMM tool[edit]

We can command a Gazebo-simulated mBot robot using COMM communications between gazeboserver and a basic Qt tool written in Python.

2018.01.13. New ball tracking: suitable for ICE and COMM communications[edit]

We have improved the compatibility of our application so that it can get images from an ICE or a COMM server, which lets it communicate with both the ROS and JdeRobot middlewares.

Furthermore, the behavior of our algorithm has also been customized to track the largest colored object. We can see how it works with two similar objects in this video.

2018.01.06. Ball tracking using Python and OpenCV[edit]

Once we know how to convert a BGR image to HSV, we can use this to extract a colored object. In HSV it is easier to represent a color than in the RGB color space. In our application, we extract a blue colored object. The method is as follows:

- Take each frame of the video.
- Convert it from the BGR to the HSV color space.
- Threshold the HSV image for a range of blue color.
- Extract the blue object alone; after that, we can do whatever we want with that image.

We can see the result in this video. The output indicates the ball position at every moment.
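
A minimal sketch of this method with OpenCV; the HSV thresholds for "blue" are an assumption:

import cv2
import numpy as np

# Hypothetical HSV range for a blue ball
BLUE_LOW = np.array([100, 120, 70])
BLUE_HIGH = np.array([130, 255, 255])

def ball_position(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, BLUE_LOW, BLUE_HIGH)    # white where the frame is blue
    moments = cv2.moments(mask)
    if moments['m00'] == 0:
        return None                                 # no blue object in the frame
    # Centroid of the mask = estimated ball position in pixels
    return (int(moments['m10'] / moments['m00']),
            int(moments['m01'] / moments['m00']))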

2017.12.20. Getting coloured objects from an image[edit]

We use the Python programming language and OpenCV to create masks and get coloured objects from an image. The goal is to make the "getColouredObject" function easy for younger students, within the JdeRobot-Kids platform.

There are lots of color-space conversion methods available in OpenCV, but we will look into only the two most widely used ones: BGR -> Gray and BGR -> HSV.

For the BGR -> Gray conversion we use the flag cv2.COLOR_BGR2GRAY. Similarly, for BGR -> HSV we use the flag cv2.COLOR_BGR2HSV. To list the other flags, just run the following commands in your Python terminal:

>>> import cv2
>>> flags = [i for i in dir(cv2) if i.startswith('COLOR_')]
>>> print(flags)

Once we have got the right range of a color, i.e. blue, we can extract a colored object as we can see on this video:

2017.10.25. Pan-Tilt Gripper PI-mBot prototype[edit]

I have just finished the new mBot prototype, adding the new huge feature: a Raspberry Pi 3, which is mounted and running thanks to an external 10.000 mAh battery.

2017.10.27. Pan-tilt gripper mBot in action[edit]

In this video we can see the new mBot prototype working with three servos: pan-tilt (2) and gripper (1). The code is Arduino-IDE, through the mBot plug-in. To be continued in Python and Arduino-Jderobot...

2017.10.25. New mBot prototype: pan-tilt gripper[edit]

I have just finished the new mBot prototype, adding new features thanks to a gripper mounted on a pan-tilt unit. I am using three servos through two RJ25 adapters (with two slots each). The result is as follows:

2017.10.08. PiCamera working using Python code[edit]

After several failed tries, PiCamera is working under Raspbian OS (Stretch), using Python code.

2017.02.08. Bluetooth Arduino mBot navigation, programmed in Python[edit]

In this video we can see an Arduino mBot robot navigating while avoiding obstacles thanks to its ultrasonic sensor, which senses its surroundings all the time so that the robot can correct its wheels according to the circumstances. It is programmed in Python and linked to the PC through a Bluetooth receiver.

2016.11.27. Arduino with led, trigger and ultrasonic components on board[edit]

We are translating all of our Arduino code into Python apps. We can see two examples here: an LED with a trigger, and the ultrasonic sensor.

2016.09.27. Robot Arduino navigation avoiding obstacles through its us sensor[edit]

In this video we can see an Arduino robot navigating while avoiding obstacles thanks to its ultrasonic sensor, which senses its surroundings all the time so that the robot can correct its wheels according to the circumstances.

2016.09.18. Robot Arduino controlled with Python language and using a GUI library called Tk[edit]

We have finally got a completely functional Arduino robot using the Python language, through the Firmata library. We have also included a GUI library called Tk, which is used to display a pair of useful sliders to control the left and right servos.

We can see the procedure for launching a Python-Arduino program (and the typical errors as well) on the first video. The second one shows the behavior of the Arduino platform.
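
A minimal sketch of this kind of program, assuming pyFirmata with the servos on digital pins 9 and 10 and the Arduino on /dev/ttyACM0 (all of these are assumptions, not the actual wiring):

from pyfirmata import Arduino
try:
    import Tkinter as tk    # Python 2
except ImportError:
    import tkinter as tk    # Python 3

board = Arduino('/dev/ttyACM0')    # hypothetical serial port of the Arduino
left = board.get_pin('d:9:s')      # hypothetical servo pins, in servo mode
right = board.get_pin('d:10:s')

root = tk.Tk()
root.title('Servo control')

# One slider per servo; moving a slider writes the angle to the corresponding pin
tk.Scale(root, from_=0, to=180, orient=tk.HORIZONTAL, label='Left servo',
         command=lambda a: left.write(int(a))).pack()
tk.Scale(root, from_=0, to=180, orient=tk.HORIZONTAL, label='Right servo',
         command=lambda a: right.write(int(a))).pack()

root.mainloop()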

2016.07. Course. Teaching: Development of computational thinking through programming and robotics[edit]

I was awarded a grant by the Ministry of Education to attend a course about teaching the development of computational thinking through programming and robotics.

I enjoyed a stay in Valencia, learning about the current panorama of Robotics education in Secondary Education around Spain.

We discussed different topics, such as ethics in the use of new technologies, and we also developed different projects using several robotics platforms suitable for educational purposes: Scratch, Lego WeDo, Arduino, App Inventor, Python, etc.


Finally, Cynthia Solomon talked about how technologies have changed since LOGO, the first programming language designed for children, which she co-created. She told us the keys to a good programming language aimed at young students.

2016.06.19. Looking for an educational platform using Arduino or Gazebo with Python[edit]

After a long break, we are back. We continue developing an educational platform in order to ease the learning of robotics programming for young learners in secondary education. Our experience at this educational level over these years, together with the background gained after a stay in Finland, taught us much more about the difficulties young learners face when they start to program robots. So we have decided to focus on the Python programming language, with JdeRobot as the robotics middleware, using the ArDrone (under the Gazebo simulator) and Arduino as the robotics hardware.

Today I have worked out how to communicate an Arduino board with a PC using Python as the programming language instead of the Arduino IDE, thanks to the "pyFirmata" library.

The Python code to turn an LED on/off with a trigger can be found here, and the real Arduino board schema and behaviour are shown on the next video:
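
A minimal sketch of this kind of LED-with-trigger program using pyFirmata; the serial port and pin numbers are assumptions, not the ones from the linked code:

import time
from pyfirmata import Arduino, util

board = Arduino('/dev/ttyACM0')    # hypothetical serial port

# An iterator thread keeps reading the serial input so digital reads stay fresh
it = util.Iterator(board)
it.start()

led = board.get_pin('d:13:o')      # LED on digital pin 13 (output)
button = board.get_pin('d:2:i')    # trigger (push button) on digital pin 2 (input)
button.enable_reporting()

while True:
    led.write(1 if button.read() else 0)   # the LED follows the trigger state
    time.sleep(0.05)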

2016.03.16. Testing a plausible simulator for young beginner-in-programming students: Gazebo 3D simulator and Kobuki Robot Platform[edit]

Gazebo is a famous 3D robotics simulator which offers a very wide range of robotic platforms simulated under its infrastructure. That is why it is a nice idea to use it for teaching Robotics to the youngest students.

Here we have tested it with the Kobuki robot, whose real platform we have in our robotics group.


Course 2015-2016. Teaching stay in Finland: teaching innovation[edit]

During this course, my project "Using Robotics as a transversal tool to teach several subjects in Secondary Education" was awarded a grant by the European Commission.

I enjoyed a stay in Finland learning from the Finnish educational system, focusing on the use of Robotics to teach several subjects in Secondary Education and at University.

All developments and research can be found here.

The official web site of the project is hosted by the European Commission here

2014.12.12. Introduction of our paper about our experience using Piaget theories[edit]

We presented this paper at JITICE, at Rey Juan Carlos University.

2014.09.17. Introduction of paper about Gazebo+JdeRobot[edit]

We presented this paper at CUIEET, a workshop held in Almaden.

2014.09.10. Searching for educational platforms: Scratch, Lego, LabView[edit]

Testing VisualHFSM. Kobuki fails loading models and meshes.

Starting with Introrob on Python.
Testing alternatives: PyRobot (pyrorobotics.com), PyMite, SimRobot.

Installing and testing Scratch 2.0 on a 64bits computer.
Testing Lego simulator from U. Paderborn (http://ddi.uni-paderborn.de/en/software/lego-mindstorms-simulator.html) and jmeSim from the U. of New South Wales.

2014.06.04. Cameraview client on Python[edit]

Creating an Ice-Python example.
Creating an image server on Python.
Creating the Cameraview client on Python.
Introduction to VisualHFSM.

2014.05.14. Last thesis topic to be covered: Robotics in Education[edit]

After a "working break", we continue this PhD. We start studying the state of art around this topic, because the goal of this last stage is closing high level robotic programming algorithms to secondary education level.
We choose Python as an easy-to-learn programming language. We start to code different basic programs.
We start to prepare a new paper, where we will show our experience of this two years working as a robotic teacher on secondary education, using Piaget constructivism theories.

2013.10.05. Introduction to the national award to the best teaching innovation XIV Ciencia en Acción, CSIC (Spain)[edit]

Visual Memory (2011-2012)[edit]

2012.07.06. Fail searching equivalent point at right image[edit]

2012.07.04. Testing vergency algorithm[edit]

2012.07.02. Testing epipolar algorithm[edit]

2012.06.07. El Mundo interview. Robots to help Alzheimer's patients[edit]

(Move forward min. 1:25)

2012.06.01. High school lecture. Hints and tips for pre-university students[edit]

2012.05.24. Visual memory behaviour with small objects[edit]

2012.05.24. Visual memory behaviour with big objects[edit]

2012.05.07. Visual attention system with predictions[edit]

In this video, our system is able to predict segments and parallelograms in memory. We can see how it recognizes several objects, such as a hard disk, a phone or an iPod, and how it instantly removes them from memory when they have disappeared from the scene.

2012.05.03. Parallelograms in memory[edit]

Now we're using a clean environment with usual objects such as: mobile phone, pads, etc. The tilt angle has been modified in order to avoid certain tilt movements.

2012.02.26. Visual memory using "giraffe" device[edit]

Here we can see how our system is able to recognize segments from the scene around the robot. The main difficulty is using the real "giraffe" device, which requires including the pan & tilt movements in the geometric model.

2011.12.07. Parallelograms attention system[edit]

Parallelograms guide the attention system. We can see a blue cross on the floor, which symbolizes where the camera is pointing. It has to match the center of the parallelogram the camera is pointing at. Sometimes, the robot attention system generates a random focus point in order to explore the whole scene around itself.

2011.12.05. Recognizing parallelograms[edit]

Now the system is able to hypothesize parallelograms. It just needs three points that form a parallelogram shape.

2011.11.25. Long term memory[edit]

In this experiment, the robot goes around the corridor of our building and it does a complete lap.

2011.11.23. Geometric model includes Pan & Tilt movements[edit]

Now we have included pan & tilt movements to the geometric model.

2011.11.22. Short term memory[edit]

Here we can see how our system is able to recognize segments from scene around robot.

2011.10.11. 3D segments reconstruction with Solis Algorithm[edit]

We recognize 2D segments in the 2D image with the Solis algorithm and then back-project them onto the floor (ground hypothesis). We can see in this image that the result is plausible.

2011.10.06. New Visual Memory GUI[edit]

Here we can see the new GUI.

2011.09.12. Solis Algorithm[edit]

We can see the 2D segment-detection algorithm based on a paper by Solis. It works fine, much better than the Canny + Hough Transform process.

2011.05.26. New GUI with several windows[edit]

On this video, we show the new functionality. Our schema includes three windows: main, opengl and navigation controller.

2011.05.18. New 3D segments reconstruction with JDE-5.0 implementation[edit]

We start to implement our visual memory under the JDE-5.0 implementation. Now our algorithm is inside an Ice component, coded in C++. We are using Gazebo 0.9 as the robotics simulator and a Pioneer 2DX as the robot platform, with two simulated Sony PTZ cameras.

In this first step, we try to recognize 2D segments in the 2D image and then back-project them onto the floor (ground hypothesis). Our algorithm uses Canny as a border filter and the Hough Transform as a segment filter. We can see in this image that the result is not plausible.

Visual Sonar (Canada Stay) & Follow Face (2009-2010)[edit]

2010.08.27. Corner detection using real images with and without Gabor Filters[edit]

As we could see in the last post, I am using Gabor filters to detect borders with a specific orientation combination and a single scale. That way, when I use the FAST algorithm to detect corners, I only get the corners under these specifications. The only reason to use FAST to detect corners is that it is quite precise at doing so. We can see this example:

a) Original image:

b) FAST cornered image (red marks):

My main purpose is to show the difference between using FAST over the Gabor-filtered image and using FAST over the original image. In this experiment, we are using a real Firewire camera to get the images in real time. The results are the following:

a) Original image:

b) Gabor filtered image (orientations: 0º x 90º):

c) FAST over Gabor filtered image:

d) FAST over original image:

The difference made by the Gabor filters is remarkable: example (c) has only the desirable corners, while example (d) has too much noise.

2010.08.18. Corner detection using Gabor filters[edit]

As I described in the last entry, I am using Gabor filters in order to detect borders. But I have realized I cannot use all forty filter responses for a real-time solution. So I have considered just a single scale (so we could say the robot has myopia) and a pair of orientations, in order to detect some corners; then we will be able to detect basic shapes. Here we have the original image:

And here we can see the results when we're using a single scale and 6 orientation combinations (0-90, 45-135, 90-180, ...).

2010.08.12. Using Gabor Filters to detect basic shapes[edit]

I have implemented a Gabor filter system which I am going to use to detect basic shapes (squares, triangles, circles). In image processing, a Gabor filter is a linear filter typically used for edge detection. The frequency and orientation representations of the Gabor filter are similar to those of the human visual system, and it has been found to be particularly appropriate for texture representation and discrimination. In the spatial domain, a 2D Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave. (Text extracted from Wikipedia.)

Basic Gabor filter form[edit]

(Maybe you've to zoom in the image)

So I get forty filters like this one, but with different orientations and scales: 5 scales x 8 orientations. That way, I can get different border-detection results.
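
A minimal sketch of such a filter bank with OpenCV; the kernel sizes and wavelengths per scale are assumptions, not the exact values used here:

import math
import cv2

def gabor_bank(scales=5, orientations=8):
    kernels = []
    for s in range(scales):
        ksize = 9 + 8 * s       # kernel size grows with the scale (assumed)
        lambd = 4.0 + 4.0 * s   # wavelength of the sinusoidal factor (assumed)
        for o in range(orientations):
            theta = o * math.pi / orientations
            # Arguments: ksize, sigma, theta, lambd, gamma, psi
            kernels.append(cv2.getGaborKernel((ksize, ksize), ksize / 3.0,
                                              theta, lambd, 0.5, 0))
    return kernels

image = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input image
responses = [cv2.filter2D(image, -1, k) for k in gabor_bank()]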

One-scale results[edit]

We can get these results using just one scale and different orientations. We can appreciate the differences. (The first picture is the original image.)

2010.07.13. Log-polar images and movement analysis[edit]

In order to get a faster attention-system algorithm, we are going to work with log-polar images. At first, we have a covert attention system guided by movement. That way, we are simulating the human eye mechanism (retina and fovea) and the basic human eye behavior: we pay attention to where we see movement.

2010.03.26. Designed new GUI[edit]

Now we have a new GUI that is more useful and makes it easier to see whatever you want. Here I show a screenshot.

2010.03.23. Pioneer following arrows from one side to the opposite[edit]

On this video we show the Pioneer's behavior with our attention system. It has to follow arrows located on the RoboCup football field, with a movement similar to the famous Ping-Pong game.

2010.03.23. Space recognition with a spiral movement, with a big threshold[edit]

Here we do the same experiment as the previous one, but now we want to produce continuous lines, so we have had to increase the parallelism threshold so that segments are fused together. The main problem is the noise we get in this case.

2010.03.12. Space recognition with a spiral movement[edit]

On this video we see the robot's spiral movement through the RoboCup football court. As we could expect, it has odometry errors in its estimations. Even so, the memory around the robot stays coherent, correcting previous erroneous estimations.

2010.03.10. Security window[edit]

Security Window has been incorporated to the robot. We can see the 3D representation.

2010.02.26. Consistent 3D memory[edit]

Now we can see how robot 3D memory is consistent with the real environment.

2010.02.25. Pioneer around lab, recognizing environment[edit]

Here we can see the representation inside the robot's 3D memory after it has been moving for a few minutes. It is a remarkable image because the odometry errors are not significant: the estimated segments and the real segments fit properly.

2010.02.17. Pioneer around lab following arrows[edit]

In these videos we can see how our robot goes around the lab avoiding obstacles, following arrows and detecting faces. The attention system is well built, so the robot only pays attention to those things it considers important for its navigation. While the robot is moving, it keeps recovering information about the environment and building its memory. In the first video, we see the Pioneer robot from an external camera. In the second one, we see the robot's memory, from the on-board camera.

2010.02.04. Algorithm following arrows[edit]

In this video, our algorithm is detecting all the objects around the robot. Now, arrows let the robot know where the goal is. When there are many arrows around the robot, only the nearest arrow is the main influence on the robot's navigation.

2010.02.01. Algorithm detect and pay attention to arrows[edit]

Now the algorithm is able to detect faces, parallelograms and arrows, although with quite a lot of noise. That way, our robot will be able to navigate following these landmarks.

2009.12.16. Algorithm detect and pay attention to faces and parallelograms[edit]

In this interesting video, we can see how our system reconstructs and follows detected faces and parallelograms. Now we have a single set of attention-system elements (parallelograms and faces) with saliency and liveliness dynamics. Both of them have a centroid point used to focus the image towards them.

2009.11.26. Running with noise[edit]

Here we can see the application behavior with a lot of noise. We have tried the parallelogram recognition with a very low Hough Transform line-detection threshold, in order to show the robustness of our algorithm.

2009.11.25. Parallelograms reconstruction[edit]

We introduce the hypothesis concept in order to detect partially seen parallelograms and reconstruct them. Here we can see lots of parallelograms on the floor and how the robot is able to detect and reconstruct some of them.

2009.11.24. Corridor clearer reconstruction[edit]

Now we can see a clearer corridor reconstruction. We have only a single line for each side of the corridor. The horizontal lines correspond to the corridor doors.

2009.11.20. Corridor corner reconstruction[edit]

One more step. Today, the Pioneer goes through the corridor with free rotational movement in order to keep reconstructing while it is turning around the corner.

2009.11.19. Corridor reconstruction[edit]

Using the last development, the Pioneer is able to reconstruct the department corridor. The main difficulty lies in the reflections on the floor, so we have carefully tuned many filter parameters in order to avoid them. We can see the result in the next two images.

2009.11.11. Whole floor reconstruction using single segments[edit]

Here we have improved the merge function in order to get the longest segment for every direction in the world. And the robot is able to get the whole floor 3D reconstruction using single lines.

2009.11.10. First segment merge implementation[edit]

At this point, we have converted our world into segments. Furthermore, we want to merge and overlap repeated lines. The first implementation is a good step because we have single, non-repeated segments in the world, and we check parallel segments and keep the segment memory correct.

The result is shown next. Now we want to improve the merge function in order to get the longest segment for every direction in the world.

2009.11.02. Floor 3D reconstruction using lines[edit]

Until now we had been using points to represent the floor lines. Now, in order to decrease the image-processing time, we want to work with lines. So we have segmented the detected image borders and we obtain a set of lines drawn with the OpenGL primitive GL_LINES.

2009.10.29. Floor 3D reconstruction with robot Nao[edit]

We have tested our algorithm with a humanoid robot Nao, as we can watch in the next video:

2009.10.28. Complete floor 3D reconstruction[edit]

Using the last development, I've added robot Pioneer movement in order to reconstruct the whole floor.

On the next videos we can see:

- 1 The distance covered by the robot.

- 2 The 3D floor reconstruction on its virtual 3D-memory.

2009.10.28. Camera extrinsic parameters manual adjustments[edit]

Because of last not-perfect results, I've added new sliders on the frontera GUI; that way, I've manually adjusted camera extrinsic parameters and now the result is perfect.

We can see the correlations between the three points of view, each of them is painted with a different color.

2009.10.22. Step by step camera extrinsics and intrinsics measurements[edit]

As I said last time, we had several problems with pan-tilt oscillations. Furthermore, the depth estimations were not very precise. So we decided to extract the camera extrinsic and intrinsic parameters step by step. 1) Using the extrinsics schema, and knowing the camera's absolute position in the world, we determined the correct camera intrinsic parameters (u0, v0 and roll). At this point, we realized that progeo uses a different coordinate system from ours (more info). This is the result:

2) After that, we put the camera above the pantilt device, we measured its position again, and this is the result.

3) The third step was to use the mathematical model, but only with a single RT matrix. We had to correct the pan-tilt position, the optical center and the tilt angle given by the pan-tilt encoders... we did that again and again until we got this result.

4) Once we knew every parameter (PANTILT_BASE_HEIGHT, ISIGHT_OPTICAL_CENTER, TILT_HEIGHT, CAMERA_TILT_HEIGHT, PANTILT_BASE_X, PANTILT_BASE_Y), we continued with the rest of the RT matrices until we had the whole system. This is the result.

5) Finally, in order to check that the mathematical model was correct, we moved the pan-tilt with saccadic movements and we were able to confirm the pan-tilt oscillation problem that I mentioned last time. We got these sequences.

2009.10.09. Problem: pantilt oscillations[edit]

We have realized that the pan-tilt movements are not uniform: on each iteration, the pan-tilt does not end up in the same position as in the last iteration. In this figure, we can see the oscillations on the pan axis (blue line) when it is moving towards the left and right sides; the final adopted positions are different. On the other hand, the tilt axis (red line) is not moving, so its position is always the same. The values are expressed in radians.

2009.10.08. 3D Floor reconstruction with camera autocalibration[edit]

Now we have introduced the RT matrix concept in order to calculate relative positions. Thus we can know the camera position in the world and its focus of attention (FOA). We have the following RT matrices:

- Robot position relative to world coordinates (translation on X & Y axis and rotation around Z axis)

- Pantilt base position relative to robot position (translation on Z axis)

- Tilt height position relative to pantilt base (translation on Z axis and rotation around Z axis)

- Tilt axis relative to tilt height (rotation around Y axis)

- Camera optical center (translation on X & Z axis)

- Focus of attention relative to camera position (translations on X axis)

Since we do not know exactly where the optical center is on the iSight camera (about 100 mm long), I have tested several positions in order to get the best match between real and virtual coordinates. The following images correspond to different optical centers: -10, -20, -30, -40, -50, -60, -70, -80 and -90 mm from the image plane towards the bottom of the physical camera. Finally, we can conclude that the best optical center estimation is the -20 mm position.

-10 mm[edit]

-20 mm[edit]

-30 mm[edit]

-40 mm[edit]

-50 mm[edit]

-60 mm[edit]

-70 mm[edit]

-80 mm[edit]

-90 mm[edit]

2009.10.02. Star Trek National Convention. Talk about Robotics[edit]

On the occasion of the Star Trek National Convention, celebrated in Fuenlabrada (Madrid), I've given a talk about the most current real robots and their main components. Nowadays we can see lots of robots whose applications are very diverse.

The slides I've used can be found here.

2009.10.02. Systematic floor reconstruction[edit]

Here we can see the three-view floor reconstruction. In this case, we have established three marks manually, corresponding to three different foci of attention (FOA). That way, we can recalibrate the camera for these three positions.

2009.09.14. Systematic search[edit]

On this video, we have tested the systematic search around the scene, in order to guarantee that the system will explore the whole scene around it. Thus, we search for faces combining random search with systematic search. Now we are sure that no face will be out of range.

2009.09.09. Following faces around scene, with saliency and liveliness dynamics[edit]

Here we can see a visual attention mechanism. Now our algorithm chooses the next fixation point in order to track several objects around the robot simultaneously. This behavior is based on two related measurements, liveliness and saliency. Attention is shared among detected faces and new exploration points when the forced time to explore the scene runs out. Moreover, this time depends on how many faces are detected: if we have several detected faces, this time will be long...

2009.09.03. Following faces around scene[edit]

Now, as we said last time, we have a continuous space over which to gaze the pan-tilt unit towards the object with the highest saliency. Sometimes we have to introduce some virtual faces to explore new zones... And when we find a face, we stop there watching it. The next step is, instead of stopping, following that face...

2009.08.31. Following multiple faces from different scene perspectives[edit]

Here is the latest version of this "intelligent followface". We have decided to change our point of view and now, instead of having three parts of the scene, we have a continuous space over which to gaze the pan-tilt unit towards the object with the highest saliency. Sometimes we have to introduce some virtual faces to explore new zones...

2009.08.25. Following multiple faces[edit]

2009.08.19. Following one face[edit]

Navigation algorithms & visual frontier hypothesis (2008-2009)[edit]

2009.07.17. Improving local navigation algorithms[edit]

We're trying three algorithms to solve the problem of local navigation:

VFF[edit]

This model creates a Virtual Force Field with forces that represent the objects and the destination, plus a force that is the resultant of both, each multiplied by a modulation parameter.

In our implementation we defined the attraction force (which represents the destination) with a constant modulus, and the repulsive force (which represents the object) with a variable modulus that grows as the robot approaches an object. We get the resultant force solving this equation:

Fresult = a * Fatrac + b * Frepuls

In the equation, a and b are the modulation parameters, and we give them their values ad hoc to produce a realistic force field. The image is an example of a virtual force field:

(Server down, not available)
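
A minimal sketch of that resultant-force computation; the way the repulsive modulus grows with distance is an assumption, not the exact function used here:

import math

A, B = 1.0, 2.0    # modulation parameters, chosen ad hoc as described above

def vff_force(robot_xy, target_xy, obstacles_xy):
    # Attraction force: constant modulus, pointing towards the target
    dx, dy = target_xy[0] - robot_xy[0], target_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy) or 1e-9
    f_attr = (dx / dist, dy / dist)
    # Repulsive force: its modulus grows as the robot approaches each obstacle
    f_rep = [0.0, 0.0]
    for (ox, oy) in obstacles_xy:
        dx, dy = robot_xy[0] - ox, robot_xy[1] - oy
        d = math.hypot(dx, dy) or 1e-9
        f_rep[0] += dx / (d ** 3)
        f_rep[1] += dy / (d ** 3)
    # Fresult = a * Fatrac + b * Frepuls
    return (A * f_attr[0] + B * f_rep[0], A * f_attr[1] + B * f_rep[1])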

We have also implemented a security window to make this algorithm safer: it consists in calculating whether there is a free zone in front of the robot and, if there is, we can move the robot at maximum speed. To improve the movements, we use fuzzy logic to make them more fluid. These two improvements give us better movements.

This video shows this algorithm in progress:

(Server down, not available)

Deliberative[edit]

In this type of navigation the robot moves along a line. This line is defined by two waypoints and the robot must stay on this line all the way; if the robot leaves the line, it returns to it before moving on.

The next video is the deliberative algorithm running:

(Server down, not available)

Hybrid[edit]

In the hybrid method we use the VFF implementation, with a point of the deliberative path as the destination. With this we define a virtual force field around the whole path and the robot moves along it avoiding the objects.

In hybrid navigation, as in VFF, we also use fuzzy logic to get fluid robot movements.

In the video you can see a simulated Pioneer robot running on a racing circuit called Cheste with the hybrid navigation algorithm:

(Server down, not available)

VFF algorithm in a real robot[edit]

Improving VFF algorithm[edit]

In our VFF implementation we use a security window which allows the robot to go through narrow places and other dangerous situations. This security window also removes the zig-zag behavior that appears in the VFF algorithm, because when the robot detects a wall with this window it moves parallel to the wall. But sometimes, when the robot is following the wall, it overshoots the target and keeps going, as you can see in the next video at about minutes 1:25 and 2:10.

(Server down, not available)

To improve this behavior we have added one condition to the algorithm: now, when the robot is close to the target, it forgets the wall and, using VFF, goes to the target. You can see the difference in the next video at about minute 2:15.

(Server down, not available)

2009.06.16. Floor 3D recognition using monocular vision from robot camera[edit]

2009.06.02. Virtual Reality Master Project: how to enhance virtual reality with 3D sound[edit]

Three-dimensional sound has been neglected in most VR and AR applications, even though it can significantly enhance their realism and immersion.

All developments related to this hypothesis are explained in detail in this final report.

2009.05.20. Pioneer's running between two lines, like a road[edit]

Using only visual information, the Pioneer robot can detect border lines on the floor and drive between them. Its behaviour is based on the VFF algorithm and we have added some ideas from the Akihisa Ohya paper "Vision-Based Navigation of Mobile Robot with Obstacle Avoidance by Single Camera Vision and Ultrasonic Sensing": the current image is divided into three vertical segments (left, center and right) and then we calculate the total number of pixels in each of the three parts, determining the direction of safe passage. That way the robot movement is smoother than using only the VFF algorithm.
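
A minimal sketch of the three-segment part of that idea, assuming a binary mask where non-zero pixels are free floor:

import numpy as np

def safe_direction(free_mask):
    # Split the image into left, center and right vertical segments
    thirds = np.array_split(free_mask, 3, axis=1)
    counts = [int(np.count_nonzero(t)) for t in thirds]
    # Steer towards the vertical segment with the most free-floor pixels
    return ('left', 'center', 'right')[int(np.argmax(counts))], counts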

2009.05.10. JDE Seminar: how to compile, link and debug our C applications on Linux[edit]

Sometimes there are many problems about how to compile our applications with gcc (GNU C Compiler) or link them with dynamic libraries and how to check the correct linked process. Furthermore, when we have perfectly built our application or executable, we want to know why our application doesn't work fine, in whose cases we need to debug it.

I explained how to solve these problems in this talk; the slides I used can be found here.

2009.05.05. Skinning, muscles, skeleton, clothes, and dynamics techniques modeled under Maya[edit]

In these videos we can see a short animated movie. I have used skeletons, muscles, clothes and several dynamics techniques in order to create a realistic animation (the characters are made of deformable parts).

Skinning is the name given to any technique that deforms the skin of a character. By extension, the term skinning is commonly used to describe subspace deformations (static or skeleton driven).

And what are skeletons? Skeletons are hierarchical, articulated structures that let you pose and animate bound models. A skeleton provides a deformable model with the same underlying structure as the human skeleton gives the human body.
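For reference (and not taken from the report), skeleton-driven subspace deformation is usually written as linear blend skinning, where each deformed vertex is a weighted blend of the bone transforms:

 \mathbf{v}' = \sum_i w_i \, M_i \, B_i^{-1} \, \mathbf{v}, \qquad \sum_i w_i = 1

Here M_i is the current transform of bone i, B_i its bind-pose transform, and w_i the skinning weight of the vertex for that bone.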

All developments are described in detail in this final report.

2009.04.28. VFF over visual information[edit]

Here is the first implementation of the VFF algorithm based on visual information. We can navigate using only the camera as sensor. In this video, the goal is always 2 metres in front of the robot, so it tries to go straight ahead but obstacles block it... A sketch of how the resulting force can be turned into motor commands is shown below.
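A minimal Python sketch of the last step only, turning a force vector into motor commands; the speed limits and the mapping are assumptions, not the thesis implementation:

 # Hypothetical sketch: turn the resulting VFF force into motor commands.
 # The goal is fixed 2 m ahead in the robot frame, matching the experiment above.
 import math
 
 MAX_V, MAX_W = 0.6, 1.0   # speed limits (assumed)
 
 def force_to_command(fx, fy):
     """fx points ahead of the robot, fy to its left; returns (v, w)."""
     angle = math.atan2(fy, fx)
     v = max(0.0, MAX_V * math.cos(angle))   # slow down when the force turns away
     w = max(-MAX_W, min(MAX_W, angle))      # rotate towards the force direction
     return v, w
 
 # Example: attractive goal at (2, 0) in the robot frame, no obstacles yet.
 print(force_to_command(2.0, 0.0))           # -> (0.6, 0.0): straight ahead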

2009.04.12. San Teleco Talk about Robotics[edit]

During the Telecommunications Department week, called San Teleco, I gave a talk about Robotics at University Rey Juan Carlos, where I described Robotics in general and how we work in the University Robotics Group: a brief overview of hardware, software, different projects, techniques, etc.

The slides used in the talk can be found here

2009.04.02. Instantaneous GPP calculation[edit]

Here, after several days, we have solved the memory problem at application launch. Now we get the world information in a different way... Furthermore, we have decoupled the gradient calculation from the schema iteration cycle, so now it can be computed very quickly.

2009.03.30. GlobalNavigation schema draws planned route[edit]

Now, once the optimised route between origin and destination has been calculated, we first draw it in yellow, and then the robot follows it, drawing its path in pink.

2009.03.29. GlobalNavigation schema with Gazebo simulator[edit]

We have been simulating our world with Gazebo. After solving some problems (e.g. the initial_position parameter), the simulator works fine and our schema runs as before.

2009.03.28. First stable GlobalNavigation schema[edit]

2009.03.27. Border points detected with OpenCV[edit]

In order to increase the visual information, we have decided to find image contours, also using OpenCV.

2009.03.27. Frontier points detected with OpenCV[edit]

As mentioned before, we can now detect the first border points using OpenCV (without any visual noise).

2009.03.24. Frontera schema using OpenCV[edit]

Because of several problems with our own filter, we have decided to use the OpenCV library's Canny filter. The algorithm now works well under lighting changes. A minimal usage sketch is shown below.
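For reference, a minimal sketch of the Canny step using the current OpenCV Python bindings (the original schema used the C API of that era; the file name and threshold values here are illustrative):

 # Hypothetical sketch: edge extraction with OpenCV's Canny filter.
 import cv2
 
 image = cv2.imread("lab_frame.png")            # assumed input image path
 gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
 gray = cv2.GaussianBlur(gray, (5, 5), 0)       # smoothing reduces spurious edges
 edges = cv2.Canny(gray, 50, 150)               # hysteresis thresholds (assumed)
 cv2.imwrite("lab_frame_edges.png", edges)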

2009.03.20. Frontera schema testing[edit]

We have been testing several real examples, the so-called "Cartabon test". We can now conclude that when objects are too far away, our application gives wrong distance estimates.

2009.03.20. Frontera schema includes the virtual Pioneer[edit]

2009.03.10. Frontera schema under GTK[edit]

We have developed a new 'frontera' schema interface in order to see the robotics lab scene completely. Now we have different cameras: four lab corner cameras, one on the lab ceiling, and the user camera.

2009.02.24. Frontera schema under GTK[edit]

Here we can see the new frontera schema, using GTK. This way we have solved the LibXCB problem, because GTK handles multithreading better.

2009.02.17. Frontier hypothesis for floor 3D recognition[edit]

2009.02.10. Color filtering for frontier hypothesis, from lab ceiling cameras[edit]

2009.02.03. Floor 3D recognition using monocular vision from robot camera[edit]

2009.01.23. Perlin Noise[edit]

Here I have created a Perlin Noise animation. I usually use Perlin Noise as a procedural texture primitive; that way I can create effects like this one, and it is used to increase the appearance of realism in some computer graphics techniques.

For example, in the second image I have created a virtual landscape using basically Perlin Noise.

2009.01.15. Modeled and animated human skeleton[edit]

Here is a human skeleton laughing. It has been modelled and animated with Maya software.

2009.01.12. Wandering[edit]

With this behaviour we want to see the robot moving towards random targets, avoiding all the objects that may be in the environment. The wander schema only gives random targets to the local navigation schema through a shared variable called "target". Those targets are calculated with a random C function and lie between (0,0), which is the robot position, and a maximum perimeter called "radio".

This schema also has a counter: if the target has not been reached within "maxtime" seconds, the schema calculates a new target. That way, with this schema the robot is always moving. A sketch of the target generation is shown below.
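The original schema was written in C; the following Python sketch only illustrates the target-generation logic, keeping the names "target", "radio" and "maxtime" from the description above (the numeric values are assumptions):

 # Hypothetical sketch of the wander schema's random target generation.
 import math
 import random
 import time
 
 radio = 3.0      # maximum radius around the robot for new targets (metres, assumed)
 maxtime = 20.0   # seconds before giving up on a target (assumed)
 
 def new_target():
     """Random point within 'radio' metres of the robot position (0, 0)."""
     angle = random.uniform(0.0, 2.0 * math.pi)
     dist = random.uniform(0.0, radio)
     return (dist * math.cos(angle), dist * math.sin(angle))
 
 target = new_target()
 started = time.time()
 
 def wander_iteration(target_reached):
     """Called every schema cycle: renew the target when reached or timed out."""
     global target, started
     if target_reached or (time.time() - started) > maxtime:
         target = new_target()
         started = time.time()
     return target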

Next there are two examples of the wander schema, running in the Gazebo simulator and in a real robot.

(Server down, not available)

The local navigation behaviour is quite simple to understand. We use the VFF algorithm, already explained, but we have incorporated a new concept called "security window" (you can see it in the next figure). With it, we can handle situations usually described as "narrow places" (e.g. doors, corridors, ...).

Now I am going to explain how it works. Under the following conditions:

 a) There is something on the left side, on the right side, or on both.
 b) There is nothing in front of the robot.

...the robot can run straight ahead quickly, but with zero angular velocity! This functionality is clearly shown in the previous video, when the cleaning lady comes near the robot and it can only move with linear speed, or when the robot goes into the WC and comes back out. The sketch below summarises the rule.
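As a minimal Python sketch of just these two conditions (the occupancy flags are assumed to come from the security window described above, and the speed value is illustrative):

 # Hypothetical sketch: the straight-ahead rule of the security window.
 def security_window_rule(left_occupied, right_occupied, front_occupied,
                          v_max=0.8):
     """If something is at either side but the front is clear,
     drive fast with zero angular velocity."""
     if (left_occupied or right_occupied) and not front_occupied:
         return v_max, 0.0      # linear speed only, no rotation
     return None                # fall back to the normal VFF behaviour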

2009.01.08. Classic pong modeled and animated under Maya[edit]

Here we have created a bouncing ball and two paddles. Then we have animated them; the first part is based on keyframe animation, so when we want to describe a movement, we mark the extreme frames as key frames and design the movement for the paddles and the ball...

2008.12.16. FollowPerson[edit]

This behaviour is another application of local navigation: the robot tries to follow a person based on the colour of their shirt. A sketch of the colour-based detection is shown below.
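A minimal Python/OpenCV sketch of the shirt-colour detection; the HSV range, the speed and the steering gain are assumptions, and the original schema belonged to the JDE framework rather than to this standalone function:

 # Hypothetical sketch: follow a person by segmenting their shirt colour.
 import cv2
 import numpy as np
 
 LOWER_HSV = np.array([100, 120, 60])    # assumed HSV range for the shirt colour
 UPPER_HSV = np.array([130, 255, 255])
 K_TURN = 0.005                          # steering gain (assumed)
 
 def follow_person_command(frame_bgr):
     """Return (v, w) steering the robot towards the shirt-coloured blob."""
     hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
     mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
     moments = cv2.moments(mask)
     if moments["m00"] < 1e3:            # no shirt visible: stop
         return 0.0, 0.0
     cx = moments["m10"] / moments["m00"]
     error = frame_bgr.shape[1] / 2 - cx # horizontal offset of the blob centre
     return 0.3, K_TURN * error          # constant advance, turn towards the person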

In this first video the robot can follow the person well because there are no obstacles to avoid.

(Server down, not available)

In this second video the robot avoids another person, using the localNavigation behaviour.

(Server down, not available)

2008.10.12. Visual map of the lab ceiling for localization method[edit]

2008.10.03. Localization using MonteCarlo Method[edit]

2008.06. Grant Holder by ME Collaboration Project: Computer Vision[edit]

During this course I have been developing a project in which I have tried to simulate a laser sensor with a common webcam. This method is typically called 'visual sonar'. Now the robot is able to detect the surrounding obstacles. The final robot behaviour is as follows (a sketch of the last two steps is given after the list):

- First, the floor colour must be separated from the coloured objects.

- Then, we can obtain an image composed of borders.

- The only interesting border is the bottom one, which marks the frontier between the floor and the obstacles.

- Using computational geometry algorithms, we can estimate the distances between the robot position and the obstacles.
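A minimal Python sketch of the last two steps, under strong simplifying assumptions (pinhole camera tilted down towards a flat floor, obstacles lying on the ground plane; all camera parameters are illustrative, not the ones used in the project):

 # Hypothetical sketch: bottom border of an obstacle mask -> ground distances.
 import math
 import numpy as np
 
 CAM_HEIGHT = 0.25             # camera height over the floor in metres (assumed)
 CAM_TILT = math.radians(30)   # downward tilt (assumed)
 FOCAL_PX, CY = 500.0, 240.0   # vertical focal length and principal point (assumed)
 
 def bottom_border(mask):
     """For each image column, the lowest row containing an obstacle pixel."""
     rows = [np.flatnonzero(mask[:, col]) for col in range(mask.shape[1])]
     return [int(r[-1]) if r.size else None for r in rows]
 
 def row_to_distance(row):
     """Flat-ground back-projection: image row -> distance on the floor."""
     depression = CAM_TILT + math.atan2(row - CY, FOCAL_PX)
     if depression <= 0:
         return float("inf")            # ray never hits the ground
     return CAM_HEIGHT / math.tan(depression)
 
 def visual_sonar(mask):
     """Minimum distance to the frontier, like a one-shot range reading."""
     rows = [r for r in bottom_border(mask) if r is not None]
     return min(row_to_distance(r) for r in rows) if rows else float("inf")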

All developments are described in detail in this final report.

2008.03.01. Modeling raytracing[edit]

What's Raytracing? It's a method that allows you to create photo-realistic images on a computer.

It attempts to trace the paths of light that contribute to each pixel of the scene. Instead of just computing which surfaces are visible, it determines the intensity contribution of each light path. Furthermore, it computes global illumination.

In our example, the first image shows a single ball illuminated by a single light. Here we have calculated only local illumination and shadows. The second image is the same example, but taking several samples per pixel, so the result is a cleaner image.

Now we have two balls and different lights, so we have to calculate global illumination and other natural effects such as reflection and refraction. Again, the second image is the same example but taking several samples per pixel, so the result is a cleaner image. A small sketch of the per-pixel supersampling idea is shown below.
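A minimal Python sketch of the supersampling step only (several jittered samples per pixel, averaged together); the shade() function and the resolution are placeholders, not the renderer used in the report:

 # Hypothetical sketch: several jittered samples per pixel, averaged together.
 import random
 
 WIDTH, HEIGHT, SAMPLES = 320, 240, 16   # resolution and samples per pixel (assumed)
 
 def shade(ray_x, ray_y):
     """Placeholder for the actual raytracer: returns an (r, g, b) colour."""
     return (0.0, 0.0, 0.0)
 
 def render():
     image = []
     for py in range(HEIGHT):
         row = []
         for px in range(WIDTH):
             acc = [0.0, 0.0, 0.0]
             for _ in range(SAMPLES):
                 # jitter the sample position inside the pixel footprint
                 x = (px + random.random()) / WIDTH
                 y = (py + random.random()) / HEIGHT
                 colour = shade(x, y)
                 acc = [a + c for a, c in zip(acc, colour)]
             row.append(tuple(a / SAMPLES for a in acc))   # averaging reduces noise
         image.append(row)
     return image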