
How to contribute

The JdeRobot project is open to contributions in development, documentation, testing, integration and research; see the step-by-step manual (thanks, Lihang Li). It is recommended to subscribe to the JdeRobot mailing list (follow these directions): all interaction with the developer and user community goes through that list.

The entry point to collaborate with the project is:

How to create a proper pull request is explained in detail here.

If you think you deserve a place in the People section, drop me a line (jmplaza AT gsyc DOT es).

Good Practices

Below is a series of good practices for every new contribution (if you need help fulfilling them, ask on the mailing list).

  • Every library or tool must have a CMakeLists.txt file.
  • Every library or tool must be properly documented, using Doxygen format for C++ and pydoc for Python. This covers classes and functions; in addition, any operation involving numbers whose meaning is not apparent must be explained.
  • C++ code must be split into separate include and src directories, both well organized (see jderobotcomm_cpp as an example).
  • Every tool should use the easyiceconfig library to read its configuration file.
  • Every tool should use jderobotcomm for communications.
  • Every GUI should use Qt5; we are moving away from GTK.
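As a sketch of the pydoc documentation practice mentioned above, a documented Python function might look like the following. The function, its parameters, and the magic number explained in the docstring are all hypothetical, chosen only to illustrate the style:

```python
def threshold_image(pixels, threshold=128):
    """Binarize a grayscale image.

    Args:
        pixels: list of integer gray levels in [0, 255].
        threshold: gray level above which a pixel counts as white.
            The default of 128 is simply the midpoint of the range,
            not a tuned value.

    Returns:
        A list of 0/1 values, one per input pixel.
    """
    return [1 if p > threshold else 0 for p in pixels]
```

Note that the docstring explains the otherwise arbitrary number 128, as the practice above requires.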

Ideas List (for Google Summer of Code 2017)

JdeRobot is a software development suite for robotics, home-automation and computer vision applications. These domains include sensors (for instance, cameras), actuators, and intelligent software in between. It has been designed to help in programming such intelligent software. It is written in C++ language and provides a distributed component-based programming environment where the application program is made up of a collection of several concurrent asynchronous components. Each component may run in different computers and they are connected using ICE communication middleware. Components may be written in C++, Python, Java... and all of them interoperate through explicit ICE interfaces.

JdeRobot simplifies the access to hardware devices from the control program. Getting sensor measurements is as simple as calling a local function, and ordering motor commands is as easy as calling another local function. The platform attaches those calls to remote invocations on the components connected to the sensor or actuator devices. They can be connected to real or simulated sensors and actuators, both locally and remotely over the network. Those functions build the API of the Hardware Abstraction Layer. The robotic application gets sensor readings and orders actuator commands through it to unfold its behavior. Several driver components have been developed to support different physical sensors, actuators and simulators. The drivers are used as components, installed at will depending on your configuration, and are included in the official release.
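The pattern can be illustrated with a minimal Python sketch. The class and method names here (`Camera.getImage`, `Motors.sendV`) are hypothetical stand-ins, not the actual JdeRobot API; in the real platform these calls would be remote invocations on driver components:

```python
class Camera:
    """Hypothetical HAL proxy for a camera; a real proxy would forward
    this call over the network to the component driving the sensor."""
    def getImage(self):
        # Stub: return a tiny fake grayscale frame (3 rows x 4 columns).
        return [[0] * 4 for _ in range(3)]

class Motors:
    """Hypothetical HAL proxy for the robot motors."""
    def __init__(self):
        self.v = 0.0
    def sendV(self, v):
        # Stub: a real proxy would issue a remote invocation here.
        self.v = v

# The control program reads sensors and commands actuators
# through what look like plain local function calls.
camera, motors = Camera(), Motors()
frame = camera.getImage()
motors.sendV(0.5 if frame else 0.0)
```

The point of the abstraction is that the control code above stays identical whether the camera and motors are real, simulated, local or remote.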

This open source project needs further development in these topics:

Project #1: TeachingRobotics: a Formula-1 race in Gazebo-7

Brief Explanation: Teaching Robotics is an academic framework for teaching robotics and computer vision. It includes several exercises, each of which provides a Python application that connects to the real or simulated robot and a template that the students have to fill in with their code for the robot behavior. There are two exercises with a Formula-1 car in Gazebo-7: one about the car following a line and another about the car avoiding obstacles while driving along the race circuit. This project aims to develop a new exercise (and solution) about two cars racing in the circuit, which also contains several dummy cars. The student will have access to the laser sensor, the onboard camera, and the car's throttle and steering wheel.

Expected results: a new robotics exercise in Gazebo-7.

Knowledge Prerequisite: Python programming skills

Mentors: Jose María Cañas (jmplaza AT

Project #2: TeachingRobotics: Localization exercise from a given map

Brief Explanation: Teaching Robotics is an academic framework for teaching robotics and computer vision. It includes several exercises, each of which provides a Python application that connects to the real or simulated robot and a template that the students have to fill in with their code for the robot algorithms. The goal of this project is to develop a new exercise (and solution) about the localization of an autonomous wheeled robot moving through a known scenario in Gazebo-7. The robot has a laser sensor and an RGBD sensor. A Monte Carlo localization algorithm will be the main approach. The application will include particle visualization and step-by-step execution.
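To give a flavor of the algorithm involved, here is a minimal, self-contained sketch of one Monte Carlo localization cycle (predict, weight, resample) for a robot on a 1-D corridor. All the numbers, the noise levels and the Gaussian sensor model are illustrative assumptions, not part of the exercise specification:

```python
import math
import random

def mcl_step(particles, motion, measurement, sigma=0.5):
    """One predict-weight-resample cycle of Monte Carlo localization
    for a robot on a 1-D corridor with a noisy position sensor."""
    # Predict: apply the odometry estimate plus motion noise to each particle.
    moved = [p + motion + random.gauss(0, 0.1) for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * sigma ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(200)]
for _ in range(5):  # robot stands still; the sensor keeps reading 4.0
    particles = mcl_step(particles, motion=0.0, measurement=4.0)
estimate = sum(particles) / len(particles)  # converges near 4.0
```

A full 2-D version with a laser sensor model is, of course, what the exercise would actually ask for; the structure of the update loop stays the same.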

Expected results: a new robotics exercise in Gazebo-7.

Knowledge Prerequisite: Python programming skills

Mentors: Jose María Cañas (jmplaza AT

Project #3: Reinforcement Learning in JdeRobot: OpenAI Gym and Gazebo

Brief Explanation: The main idea of this project is the development of an extension for OpenAI Gym to introduce Reinforcement Learning into the JdeRobot framework with the Gazebo simulator. Once the extension has been developed, the second part of the project consists in using that OpenAI Gym extension to learn a navigation behavior for a simulated Turtlebot robot (or a drone, or an autonomous car).
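The interaction pattern such an extension would follow can be sketched in plain Python. `GridEnv` below is a toy stand-in for a Gazebo-backed environment, not part of OpenAI Gym or JdeRobot, and the Q-learning hyperparameters are illustrative:

```python
import random

class GridEnv:
    """Toy environment exposing the reset/step interface popularized by
    OpenAI Gym; a real extension would back these calls with Gazebo."""
    def __init__(self, size=5):
        self.size, self.pos = size, 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: 0 = move left, 1 = move right
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action else -1)))
        done = self.pos == self.size - 1
        return self.pos, (1.0 if done else -0.01), done

# Tabular Q-learning on the toy environment.
random.seed(1)
env = GridEnv()
Q = [[0.0, 0.0] for _ in range(env.size)]
for _ in range(200):
    s, done = env.reset(), False
    while not done:
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < 0.1 else Q[s].index(max(Q[s]))
        s2, r, done = env.step(a)
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2
# After training, the greedy policy at the start state prefers "right".
```

The project would replace the tabular Q function with a deep convolutional network fed by camera images, but the reset/step loop above is exactly the Gym contract the extension must implement.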

Expected results: an extension for OpenAI gym for robotics using JdeRobot and Gazebo and the implementation of a behaviour using Deep Convolutional Q-Learning.

Knowledge Prerequisite: Python programming, computer vision, machine learning, Gazebo, robotics

Mentors: Alberto Martín (almartinflorido AT

Project #4: Automatic generation of ROS node with an automaton using VisualHFSM tool

Brief Explanation: VisualHFSM is a visual programming tool which uses hierarchical finite state machines (HFSMs) to generate code automatically. This functionality makes it easier to program complex behaviors for different types of robots such as the Kobuki or the ArDrone. The tool automatically creates a C++ or Python JdeRobot component that implements the hierarchical FSM designed visually. Such a component graphically shows the current state of the HFSM at runtime, which is very convenient for debugging the automaton. A more detailed description can be found here; different examples and applications made with it can be found here.

Although VisualHFSM is mature and ready to use, there is still room for improvement. This proposal seeks to modify the tool into a more powerful version that provides more functionality and flexibility. The main goal of this project is to generate a ROS application which implements the automaton and can connect to other ROS driver nodes. Other proposed modifications are: refactoring the VisualHFSM GUI from GTK to Qt5, and adding new features to improve the real-time debugging process.
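The kind of automaton VisualHFSM generates can be sketched as a small Python state machine. The states, events and handlers below are illustrative, not actual generated code:

```python
class StateMachine:
    """Minimal finite state machine: each state maps to a handler
    function that receives an event and returns the next state name."""
    def __init__(self, initial, handlers):
        self.state, self.handlers = initial, handlers

    def step(self, event):
        self.state = self.handlers[self.state](event)
        return self.state

# Illustrative two-state robot behavior: search for a line, then follow it.
def search(event):
    return "follow" if event == "line_found" else "search"

def follow(event):
    return "search" if event == "line_lost" else "follow"

fsm = StateMachine("search", {"search": search, "follow": follow})
fsm.step("nothing")      # stays in "search"
fsm.step("line_found")   # transitions to "follow"
```

A hierarchical FSM nests a machine like this inside each state; the generated ROS node would run the `step` loop at a fixed rate and publish the current state for the runtime GUI.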

Expected results: a new release of visualHFSM tool.

Knowledge Prerequisite: C++ programming skills, ROS robotic middleware

Mentors: Samuel Rey (samuel.rey.escudero AT

Project #5: Translation of a Scratch program to ROS python component

Brief Explanation: Scratch is a free visual programming language used in children's education to teach basic programming. It is composed of graphical building blocks (instructions) that must be visually connected to build the program. The idea here is to build on our experience translating a graphical visualHFSM description into a JdeRobot-ROS component, and to do the same with a different visual language, Scratch. The goal is to explore the use of Scratch with the robots, both simulated and real, that JdeRobot-ROS supports and simplifies. We will start with simulated robots in Gazebo, although the approach will be the same with real robots, since they use the same interfaces.

Expected results: a new tool prototype that reads Scratch programs and translates them into ROS Python components.

Knowledge Prerequisite: Python programming skills, ROS robotic middleware

Mentors: Jose María Cañas (jmplaza AT, L.Roberto Morales (lr.morales.iglesias AT

Project #6: Improved video support and transmission in JdeRobot

Brief Explanation: In JdeRobot we use the Cameraserver component and a ROS node for USB cameras to provide images to applications, frame by frame. Currently video files, webcams, Video4Linux cameras and FireWire cameras are supported. Cameraserver uses OpenCV. We would like to extend the supported image sources (GoPro cameras, web videos like YouTube...). In addition, we would like to improve the frame rate between image servers and JdeRobot applications, such as Cameraview or cameraviewJS from a web browser. Currently images can be transmitted with or without compression; the compression software needs to be improved.

Expected results: improved cameraserver component

Knowledge Prerequisite: C++, Python and JavaScript programming skills,

Mentors: Aitor Martínez (aitor.martinez.fernandez AT

Project #7: 3D Self localization (visualSLAM) component with RGBD sensor

Brief Explanation: RGBD sensors such as the Kinect1 and Kinect2 from Microsoft, the Xtion Pro Live from Asus and the Astra from Orbbec are very useful in robotics as cheap 3D sensors, for obstacle detection, map building, self-localization, robot navigation, etc. Several visualSLAM techniques are available in the scientific community to solve the 3D self-localization of an RGBD sensor in real time with no a priori environment map. We are interested in robustly solving the 3D localization problem of a moving RGBD sensor in real time. Knowing the 3D position reliably opens the door to robust autonomous indoor navigation of drones, for instance.

Expected results: a JdeRobot component which provides a 3D pose estimation (position and orientation) in real time using the pose3D ICE interface

Knowledge Prerequisite: C++ programming skills,

Mentors: Francisco Rivas (franciscomiguel.rivas AT, Alberto Martín (almartinflorido AT

Project #8: Deep Learning to detect objects at images from RGBD sensors

Brief Explanation: Deep learning is a fast-growing field among computer vision researchers. There are several challenges, such as COCO and Pascal VOC, based on applying these techniques to detect objects directly in images. The main idea of this project is to apply these techniques by fusing RGB and depth data (acquired from Xtion sensors) to improve on object detection based only on RGB data.

Expected results: a CNN configuration that combines depth and RGB data to detect objects

Knowledge Prerequisite: C++ programming skills, computer vision, object detection, classification, deep learning, TensorFlow

Mentors: Francisco Rivas (franciscomiguel.rivas AT

Application instructions for GSoC-2017

We welcome students to contact the relevant mentors before submitting their application on the GSoC official website. If in doubt about which mentor(s) to contact, send mail to jmplaza AT, franciscomiguel.rivas AT and almartinflorido AT. We recommend browsing previous GSoC student pages to look for ready-to-use project ideas and to get an idea of the expected amount of work for a valid GSoC proposal.


  • Git experience
  • C++ and Python programming experience (depending on the project)

Programming tests

The JdeRobot organization will prepare three small coding tests (standalone exercise or bug fix) before accepting any candidate proposal.

Send us your CV

By email, to jmplaza AT AND franciscomiguel.rivas AT AND almartinflorido AT


After doing the programming test, just send us this template with the requested information.

1. Contact details
  • Name and surname:
  • Country:
  • Email:
  • Public repository/ies:
  • Personal blog (optional):
  • Twitter/Identica/LinkedIn/others:
2. Your idea
  • Title
  • Brief description of the idea
  • The state of the software BEFORE your GSoC.
  • The addition that your project will bring to the software.
3. Timeline
  • Now split your project idea into smaller tasks. Quantify the time you think each task needs. Finally, draw a tentative project plan (timeline) covering the whole GSoC period. Don't forget to also include the days on which you don't plan to code because of exams, holidays, etc.
  • Do you understand this is a serious commitment, equivalent to a full-time paid summer internship or summer job?
  • Do you have any known time conflicts during the official coding period?
4. Studies
  • What is your School and degree?
  • Would your application contribute to your ongoing studies/degree? If so, how?
5. Programming
  • Computing experience: operating systems you use on a daily basis, known programming languages, hardware, etc.
  • Robot or Computer Vision programming experience:
  • Other software programming:
6. GSoC participation
  • Have you participated in GSoC before?
  • How many times, which year, which project?
  • Have you applied but were not selected? When?
  • Have you submitted/will you submit another proposal for GSoC 2017 to a different org?

Previous GSoC students

How to increase your chances of being selected in GSoC-2017

If you put yourself in the shoes of the mentor who has to select the students, you'll immediately realize that some behaviors are usually rewarded. Here are some examples.

Be proactive

Mentors are more likely to select students that openly discuss the existing ideas and/or propose their own. It is a bad idea to submit your idea only on the Google website without discussing it first, because it won't get noticed.

Demonstrate your skills

Consider that mentors are contacted by several students applying for the same project. A way to show that you are the best candidate is to demonstrate that you are familiar with the software and that you can code. How? Browse the bug tracker (the issues in the JdeRobot GitHub project), fix some bugs and propose your patch on the mailing list, and/or ask the mentors to challenge you! Moreover, bug fixes are a great way to get familiar with the code.

Demonstrate your intention to stay

Students that are likely to disappear after GSoC are less likely to be selected, because there is no point in developing something that won't be maintained. Moreover, one goal of GSoC is to bring new developers into the community.


Read the relevant information about GSoC in the wiki / web pages before asking. Most FAQs have been answered already!

Mailing List

The general mailing list for jderobot users and developers is jderobot [at] gsyc [dot] es. In this list we talk about its usage, configuration, new features, bugs, future developments, bug solutions...


  • Roberto Calvo (rocapal)
    • contributions: project maintainer, Surveillance, components in Android, 3DPeopleTracker
  • JoseMaría Cañas (jmplaza)
    • contributions: project maintainer, JdeRobot lead, progeo lib, fuzzylib, introrob
  • Satyaki Chakraborty (shady-cs15)
    • contributions: compatibility with ROS
  • Redouane Kachach (redo)
    • contributions: calibrator, imgrectifier, TrafficMonitor
  • Alberto Martín Florido (almartinflorido)
    • contributions: ArDroneServer, drones,
  • Aitor Martínez (aitormf)
    • contributions: package generation, web technology, cameraview.js, rgbdviewer.js, uavviewer.js
  • Luis Roberto Morales (lr-morales)
    • contributions: project maintainer, cmake, gtk3Dviewer
  • Eduardo Perdices (eperdices)
    • contributions: visionlib, calibrator, mobileTeleoperator, replayer, opencvdemo
  • Francisco Pérez (fqez)
    • contributions: project maintainer
  • Samuel Rey (reysam93)
    • contributions: visualHFSM tool

  • Francisco Rivas (frivas, chanfr)
    • contributions: NaoBody, openniServer, kinectviewer, 3DPeopleTracker

Previous contributors

  • Gonzalo Abella (gdago)
    • contributions: v4l2 driver
  • Victor Arribas (varhub)
    • contributions: ArDroneServer (GPS support), libEasyIce, refactoring of quadrotor2 and flyingkinect plugins for Gazebo
  • Agustín Gallardo
    • contributions: calibrator
  • Teodoro González
    • contributions: gazebo driver
  • Maikel González
    • contributions: introrob, basic component, cmake
  • Alejandro Hernández (ahcorde)
    • contributions: replayer, recorder, kinectserver
  • Victor Hidalgo
    • contributions: CarSpeed
  • Lihang Li (hustcalm)
    • contributions: RTABmap component for visual 3D localization
  • David Lobato (dlobato)
    • contributions: software architecture design, ice expert, cameraserver
  • Javier Martín
    • contributions: evi driver, mplayer driver
  • Sara Marugán (salons)
    • contributions: ElderCare, cameraserver
  • Borja Menéndez (bmenendez)
    • contributions: project maintainer, NaoServer, Nao in Gazebo, visualHFSM
  • Andrei Militaru (militaru92)
    • contributions: ROScompatibility library
  • Juan Navarro (jnbosgos)
    • contributions: flyingkinect
  • Antonio Pineda
    • contributions: firewire driver, player driver, ElderCare
  • Rubén Salamanqués
    • contributions: visualHFSM
  • Jose Antonio Santos (jcaden)
    • contributions: project maintainer, wiimote driver, wiioperator, mastergui, graphics_gtk, opflow, mplayer
  • Julio Vega (jmvega)
    • contributions: JdeRobot manual, introrob, giraffeserver, visionlib, gazeboserver
  • Daniel Yagüe
    • contributions: Drones in Gazebo