- Julio Vega, (PhD): Vision-based robotics and robotics teaching for pre-university students
- Mario Fernandez Guerrero, (grad): web technologies.
- Andrés Hernández, (grad): Visual landing of a drone
- Diego Jiménez, (grad): 3DR solo drone support and behavior
- Nacho Condés, (grad): TensorFlow
- Irene Lope, (grad): new exercises in JdeRobot-Academy
- Carlos Awadallah, (grad): JdeRobot-Academy
- Jesús Saiz, (grad): drones
- Jorge Vela, (grad): Precise landing of a drone
- Vanessa Fernández, (grad): new exercises in JdeRobot-Academy
- Marcos Pieras, (master): Deep learning for object detection and tracking
- Manuel Zafra, (grad): fine 3D path following of a drone
- Javier Benito, (grad): visual odometry in 3D with RGBD sensor
- David Pascual, (grad, co-advised with Inmaculada Mora): deep learning on RGBD sensors
- Nuria Oyaga, (grad, co-advised with Inmaculada Mora): deep learning on RGBD sensors
- Walter R. Cuenca, (grad): teaching web technologies
- Álvaro Villamil, (grad): robotic arm exercise in JdeRobot-Academy.
- Eduardo Perdices, (PhD): Efficient visual 3D localization in real time (SDVL)
- Alberto Martín, (master, co-advised with Francisco Rivas): SLAM, 3D visualSLAM over RTABmap
- Carlos Rodríguez, (grad): web technologies
- Nacho Arranz, (grad): web technologies
- Alberto Pavo, (grad): JdeRobot applications to and from YouTube
- Jose Antonio Fernández, (undergrad): UAVs
- Arturo Vélez, (grad): Drone visually pursuing a textured object.
- Redouane Kachach, (PhD): Traffic Monitoring using vision
- Víctor Arribas, (master): benchmark for visualSLAM algorithms, including international datasets
- Jorge Cano, (grad): Building a UAV: from the hardware to the driver and autonomous applications
- Iván Rodriguez, (grad): WebRTC from a drone
- Samuel Martín, (grad): 3D Maps of flat patches in real environments from RGB-D data.
- Víctor Nieto, (grad): Teaching Robotics: global navigation
- Samuel Rey, (grad): improving the visualHFSM tool for robot programming
- Rebeca Sáez, (grad): people counter with RGBD sensor
- Alberto López-Cerón, (master): 3D visual localization over a drone using markers
- Iñaki San Román, (master): Visual Odometry
- Daniel Azuara, (grad): visualSLAM, OGRE, bullet, Augmented Reality applications
- Javier Fernandez, (grad): home automation
- Yazmin Cumberbirch, (grad): visualSLAM, Augmented Reality applications
- Daniel Gómez, (undergrad): Visual tracking of traffic vehicles
- Juan Navarro, (grad): extracting 3D plane patches from RGBD sensor data.
- Evelin Fuster, (grad): Android application for Alzheimer patients therapy.
- Miguel Ángel Tomé & José María Esteban, (grad): Web system for power consumption data
- Francisco Pérez, (grad): Nao walking in Gazebo5 and JdeRobot.
- José Manuel Villarán, (grad): AutoRob, marker based visual localization for autonomous robots.
- Daniel Yagüe, (grad): ArDrone quadcopter in Gazebo
- Álvaro Sánchez, (grad): automatic bank reports generation
- Livio Calvo, (undergrad): Unmanned Aerial Vehicle
- Luis Menendez, (master): Tracking objects in cluttered environments
- Laura Martín, (collaboration): Teaching robotics
- Edgar Barrero, (grad, co-advised with José Centeno): Web interface for Surveillance project. Ruby on Rails and JdeRobot
- Daniel Martín, (grad): RGBD SLAM techniques
- Luis Roberto Morales, (grad): Multimodal evolutionary algorithm for self-localization of autonomous robots
- Francisco Rivas, (master and grad): ElderCare-5, 3DPeopleTracker
- Eloy Montero, (master): Analysis of algorithms for visual object tracking
- Alberto Martín, (grad): UAV Ar.drone visual control, Air project
- Alejandro Hernández, (master): Visual localization for Augmented Reality
- Borja Menéndez, (master): visualHFSM for Nao
- Daniel Castellano, (grad): home automation system
- Humberto Urrutia, (master): Teaching LEGO NXT programming
- Juanjo García Cantero, (grad): calibratorRGBD component
- Agustín Gallardo, (undergrad): Visual Calibrator
- Maikel Gonzalez, (undergrad): jderobot-5.1 release
- Alejandro Hernández, (grad): Local Memory of a mobile robot based on depth sensors
- Rubén Salamanqués, (undergrad): Visual tool for robot programming with Hierarchical Finite State Machines
- Pablo Miangolarra, (grad): Security system for the management of heterogeneous sensors in critical infrastructures
- David Yunta, (undergrad): Tool for visual programming of robot behaviors with Finite State Machines.
- Rubén González, (master): Three-dimensional Reconstruction of Indoor Environments for Visual Navigation in Robots
- Francisco Rivas, (undergrad): Learning walking gaits for Nao humanoid
- Darío Rodríguez, (undergrad): Multimodal evolutionary algorithm for self-localization using vision and laser
- Javier Vázquez, (master): Teaching robotics with jderobot-5.
- Roberto Calvo, (master): Enabling Technologies: Libre Software and small computers.
- David Lobato, (master): jderobot 5: Component-based framework for robot applications.
- Redouane Kachach, (master): Car Classifier.
- Eduardo Perdices, (master): Visual Self-Localization in the RoboCup with sampling-based algorithms.
- Sara Marugán, (master): Autonomous fall detection system that sends alarms to a cell phone.
- Jorge Bermejo, (undergrad): Nao humanoid support in Gazebo 3D robot simulator.
- Carlos Iván Martín, (undergrad): Visual Editor for Finite State Machines in jderobot.
- Luis Miguel López, (grad): Localization & environment reconstruction from a single moving camera.
- Sara Marugán, (grad): People 3D tracking using volumetric primitives.
- Pablo Miangolarra, (undergrad): Visual memory 3D for mobile robot with stereo couple.
- Carlos Agüero, (PhD, co-advised with Vicente Matellán): multi-robot object localization, task allocation, and multi-target object localization.
- José Manuel Domínguez, (grad): 3D world localization.
- Eduardo Perdices, (grad): Visual Self-Localization in the RoboCup based on 3D goal detection.
- Gonzalo Abella, (undergrad): Visual attention.
- David Muelas, (undergrad): 3D Skeleton Visualizer.
- Francisco Martin, (PhD, co-advised with Vicente Matellán): several self-localization algorithms oriented to RoboCup environments, where the odometry of legged robots is unreliable and vision is the robots' main sensor.
- Pablo Barrera, (PhD, co-advised with Vicente Matellán): a particle filter to visually track several objects in 3D. It combines Importance Sampling (using abduction from the images) and Condensation in a truly multimodal Monte Carlo particle filter.
- Roberto Calvo, (grad): a visual application that builds and shows the 3D representation of the scene surrounding a robot, using a stereo camera pair. It uses an attention mechanism to guide the robot gaze through the relevant objects in front of the robot.
- Julio Vega, (grad): three components for a guide robot: (1) a global navigation component, which is based on the GPP algorithm; (2) a local navigation component, which is based on the VFF algorithm; and a (3) self localization component.
- Víctor Hidalgo, (grad): an application, named CarSpeed, that detects the speed of the vehicles travelling through a regular road. It uses a simple uncalibrated camera.
- Redouane Kachach, (grad): Automatic camera calibration using DLT. It automatically searches in the image for relevant points in the 3D pattern and finds the intrinsic and extrinsic parameters of the camera. It is easier to use than ARToolkit.
- Javier Martín, (undergrad): Face detection and tracking on jdec platform, several JDE schemas that detect faces and track them in image flows from videos, from real cameras, etc.
- Manuel Mendoza (undergrad, co-advised with Pablo Barrera): the classic matching and triangulation method to estimate a 3D scene from a stereo pair. He tested it with real cameras, not only simulated ones.
- Sara Marugán, (undergrad): has developed an application that learns the color of several people walking inside a room and tracks their 3D position in real time using the images from four cameras.
- José A. Santos (undergrad) has developed a schema that computes the optical flow in an image sequence (from a real camera or a video) and two applications that use it: a visual car counter and an EyeToy-type teleoperator for our robots.
- Angel Cortés (undergrad) has developed an evolutionary multimodal localization algorithm. It evolves several races of individuals at the same time to keep several hypotheses. It robustly localizes the robot using a laser sensor in symmetric environments.
- Iván García, (undergrad): Visual 3D reconstruction with evolutionary algorithms. He has developed a visual multi-object 3D tracker, based on genetic algorithms with both points and 3D segments as individuals.
- Jesús Ruíz-Ayúcar (undergrad) has developed jdeneoc, a new software implementation of the JDE cognitive architecture for robotic applications. Schemas are integrated as plugins in jdeneoc. It uses OpenGL for 3D visualization, plus GTK and GLib.
- José M. Esteban (undergrad) has developed a visual 3D perception algorithm which uses matching, triangulation and 3D segmentation.
- Olmo León (grad, co-advised with Pablo Barrera). He programmed a vision-based application which attentively builds a 3D representation of the scene and uses it for autonomous navigation.
- Ricardo Palacios (undergrad). He developed several schemas for mate, people, door and wall robust identification using ethological sensor fusion techniques.
- Víctor Hidalgo (undergrad). He developed a vision-based mate detection and following for the Pioneer robot.
- Antonio Pineda (undergrad, co-advised with Pablo Barrera). He developed a vision-based security application using a particle filter and a fly filter to localize and track a relevant object in 3D.
- David Lobato, (grad): jde+. He designed and programmed JDE+, a new object-oriented implementation of the JDE architecture for robotics applications.
- Marta Martínez de la Casa (grad). She studied attention mechanisms and developed a new one for a single camera on a pantilt unit to represent the whole scene using saccadic movements, and so achieving a broader scope than the camera's field of view.
- Alberto López (undergrad). He programmed a localization algorithm for the Pioneer robot based on a visual beacons 2D-map and camera images.
- Redouane Kachach (undergrad). He programmed a localization algorithm for the Pioneer robot based on 2D-map and laser measurements.
- Pedro M. Díaz (undergrad). He developed a vision-based wander behavior for the Pioneer robot.
- Carlos Agüero and Víctor Gómez (grad). They have worked on the Speed Intelligent System, an autonomous system to detect overtaking situations for heavy trucks and buses.
- Raúl Isado (undergrad). He has worked on the gradient path planning algorithm (from Kurt Konolige), implementing it as a JDE schema.
- Alejandro López (undergrad). He has programmed a robot path planner using visibility graphs.
- Roberto Calvo (undergrad). In his bachelor thesis he developed a follow-person behavior for a Pioneer robot, using an on-board camera and pantilt unit.
- Ricardo Ortiz (undergrad). He developed a follow-person behavior for a Pioneer robot, using only the on-board camera and sonars.
- David Lobato (undergrad). He implemented a dynamic window control method (from Sebastian Thrun) for local navigation on a Pioneer robot.
- Maria Angeles Crespo (undergrad). She programmed a Monte Carlo localization algorithm using only visual information, and built a realistic simulation environment for the experiments.
- Juan José Martínez (undergrad, co-advised with Vicente Matellán). He programmed the behavior set of a soccer player in the simulated RoboCup, the forward, using the JDE architecture.
- Marta Martínez (undergrad). She programmed a follow ball behavior for the EyeBot robot using the birdeye camera.
- Alfonso Matute (undergrad). He implemented a calibration tool for color filtering: in RGB, in HSI, in Lab.
- Victor Gómez (undergrad). In his bachelor thesis he developed a reactive wall following behavior using only vision information, on the EyeBot robot.
- Félix San Martín (undergrad). He programmed a follow ball behavior for the EyeBot robot using the local camera.
- Esther García (undergrad). She implemented a teleoperator for the EyeBot robot, and wrote our first EyeBot manual.
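Monte Carlo (particle-filter) localization recurs across several of the theses above (Perdices, Cortés, Crespo, Martín). As background only — this sketch is not taken from any of those works, and every name, sensor model, and parameter in it is invented for illustration — one predict-weight-resample cycle of a minimal 1D particle filter might look like:

```python
import math
import random

def monte_carlo_step(particles, motion, measurement, world_size,
                     motion_noise=0.1, sense_noise=0.5):
    """One predict-weight-resample cycle of a toy 1D particle filter.

    Illustrative only: particles are hypothesized positions in a
    corridor [0, world_size]; the robot senses its distance to the
    wall at x = 0.
    """
    # Predict: apply the commanded motion plus noise to every hypothesis.
    moved = [min(max(p + motion + random.gauss(0.0, motion_noise), 0.0),
                 world_size) for p in particles]
    # Weight: Gaussian likelihood of the range reading for each hypothesis.
    weights = [math.exp(-(p - measurement) ** 2 / (2 * sense_noise ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    # Resample: draw hypotheses in proportion to their weights, so the
    # particle cloud concentrates on positions consistent with sensing.
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(moved))

random.seed(0)
world = 10.0
particles = [random.uniform(0.0, world) for _ in range(500)]
true_pos = 2.0
for _ in range(20):                      # robot advances 0.3 units per step
    true_pos = min(true_pos + 0.3, world)
    particles = monte_carlo_step(particles, 0.3, true_pos, world)
estimate = sum(particles) / len(particles)  # posterior mean position
```

The multimodal variants mentioned above would additionally maintain several separated particle clusters (or "races" of individuals) to represent competing hypotheses in symmetric environments, rather than a single unimodal cloud.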