TheRoboticsClub


Do you want to learn and enjoy programming robots, Artificial Intelligence or Computer Vision applications? We have a Robotics Club inside the JdeRobot organization for that. There are currently many active projects in AI, specifically in DeepLearning, and we are also working hard on new JdeRobot tools. New developers, testers and users are welcome. Take a look at the ongoing projects.

To start working in the Robotics Club, just download JdeRobot and run several examples. Record some videos of them running on your computer and upload them to YouTube. Please send us (josemaria.plaza AT urjc.es) your CV so we can get to know your programming, robotics or computer vision abilities and think of a suitable collaboration project for you. Python, C++ or JavaScript skills are desirable and useful.

Once accepted, you will be assigned a mentor who will help you, advise your collaboration and teach you the required skills. Typically we arrange periodic meetings through video conference (Google Hangouts) to plan the next steps. After a few weeks you will have your own GitHub repository in the organization and your wiki page at the JdeRobot organization.



Ongoing projects

Proposed challenges

JdeRobot-Academy: fleet of robots for Amazon logistics store

Brief Explanation: JdeRobot-Academy is a framework for robotics and computer vision teaching, built on JdeRobot. It is composed of a collection of cool exercises in Python about robot programming and computer vision. One nice exercise to be included is the navigation of a fleet of robots, with their path planning and coordination. The scenario is an Amazon warehouse, where the fleet of Kiva robots should autonomously move the goods from the providers' input to the storing location and from there to the output bay. The robot model in Gazebo has to be developed, along with the Python template node (with its GUI) that will host the student code, and a tentative solution.
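
As a rough illustration of what the template node could look like, here is a minimal Python sketch of an iterative algorithm class hosting the student code; the class and method names are hypothetical and not taken from the current JdeRobot-Academy code base:

    import time

    class FleetAlgorithm:
        """Hypothetical template that the student code would fill in."""
        def __init__(self, robots):
            self.robots = robots  # proxies to the simulated Kiva robots

        def execute(self):
            # Student code goes here: plan a path for each robot and
            # coordinate the fleet so the robots do not collide.
            pass

    def main_loop(algorithm, period=0.1):
        # The template node calls the student algorithm at a fixed rate.
        while True:
            algorithm.execute()
            time.sleep(period)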


Expected results: a new robotics exercise in Gazebo.

Knowledge Prerequisite: Python programming skills.

Mentors: Alberto Martín (alberto.martinf AT urjc.es).

JdeRobot-Academy: robot navigation using Open Motion Planning Library

Brief Explanation: JdeRobot-Academy is an academic framework for teaching robotics and computer vision. It includes several exercises, each consisting of a Python application that connects to the real or simulated robot and provides a template that the students have to fill in with their code for the robot algorithms.

The main idea of this project is to introduce OMPL (the Open Motion Planning Library) into JdeRobot-Academy through a new robot navigation exercise. For this task, the student will develop a new exercise and its solutions, using different path planning algorithms for an autonomous wheeled robot or drone that moves through a known scenario in Gazebo.
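
For reference, OMPL provides Python bindings; a minimal sketch of planning a 2D path with them (the bounds and the disc obstacle are just example values) could look like this:

    from ompl import base as ob
    from ompl import geometric as og

    # Plan in a 2D state space bounded to [-10, 10] x [-10, 10].
    space = ob.RealVectorStateSpace(2)
    bounds = ob.RealVectorBounds(2)
    bounds.setLow(-10)
    bounds.setHigh(10)
    space.setBounds(bounds)

    def isStateValid(state):
        # Example validity check: avoid a disc of radius 1 at the origin.
        return state[0] ** 2 + state[1] ** 2 > 1.0

    ss = og.SimpleSetup(space)
    ss.setStateValidityChecker(ob.StateValidityCheckerFn(isStateValid))

    start = ob.State(space)
    start[0], start[1] = -5.0, -5.0
    goal = ob.State(space)
    goal[0], goal[1] = 5.0, 5.0
    ss.setStartAndGoalStates(start, goal)

    # Any OMPL geometric planner can be plugged in here, e.g. RRTConnect.
    ss.setPlanner(og.RRTConnect(ss.getSpaceInformation()))
    if ss.solve(1.0):  # planning time budget in seconds
        ss.simplifySolution()
        print(ss.getSolutionPath())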

Expected results: a new robotics exercise in Gazebo.

Knowledge Prerequisite: Python programming skills.

Mentors: Alberto Martín (alberto.martinf AT urjc.es).

Improving DetectionSuite deep learning tool including segmentation and classification tools

Brief Explanation: DetectionSuite is an on-development tool to test and train different DeepLearning architectures for object detection on images. It accepts several known international datasets like PASCALVOC and allows the comparison of several DeepLearning architectures over exactly the same test data. It computes several objective statistics and measures their performance. Currently it support YOLO architectures on Darknet framework. The goal of this project is to expand the supported datasets (ImageNet, COCO...) and expand the neural frameworks (Keras, TensorFlow, Caffe...). In addition several detection architectures should be trained and compared with the new release of the tool.
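
To give a flavor of the objective statistics involved, detection benchmarks typically match detections to ground-truth boxes through intersection over union (IoU); a minimal Python sketch:

    def iou(box_a, box_b):
        """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
        ix1 = max(box_a[0], box_b[0])
        iy1 = max(box_a[1], box_b[1])
        ix2 = min(box_a[2], box_b[2])
        iy2 = min(box_a[3], box_b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / float(area_a + area_b - inter)

    # A detection counts as a true positive when its IoU with a
    # ground-truth box of the same class exceeds a threshold
    # (0.5 in the PASCAL VOC evaluation).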

Expected results: a new release of the DetectionSuite tool extending the existing functionality for object detection and also covering two new deep learning problems, classification and segmentation, including new statistics for each of them.

Knowledge Prerequisite: C++ and Python programming skills.

Mentors: Francisco Rivas (franciscomiguel.rivas AT urjc.es).

Improving VisualStates tool, full compatibility with ROS

Brief Explanation: VisualStates is a tool for programming robot behaviors using automata. It combines a graphical language to specify the states and the transitions with a text language (Python or C++). It generates a ROS node which implements the automaton and shows a GUI at runtime with the currently active state, for debugging. Take a look at some example videos. VisualStates currently only supports subscribing and publishing to topics. We aim to integrate all the communication features of ROS, as well as basic packages that would be useful for behavior development. In the scope of this project the following improvements are targeted:

  1. The integration of ROS services: the behaviors will be able to call ROS services (see the sketch after this list).
  2. The integration of ROS actionlib: the behaviors will be able to call actionlib action servers.
  3. Generating and reading SMACH behaviors in VisualStates, so that existing SMACH behaviors can be modified and new ones generated.
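
A minimal sketch of what the generated code could do once services are integrated, using the standard rospy API (the /reset_world service is the stock Gazebo one, used here only as an example):

    import rospy
    from std_srvs.srv import Empty

    rospy.init_node('visualstates_behavior')
    # Block until the service is advertised, then call it from a state.
    rospy.wait_for_service('/reset_world')
    reset_world = rospy.ServiceProxy('/reset_world', Empty)
    reset_world()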

Expected results: a new release of VisualStates tool.

Knowledge Prerequisite: Python programming skills.

Mentors: Okan Aşık (asik.okan AT gmail.com).

Improving VisualStates tool, library of parameterized automata

Brief Explanation: VisualStates is a tool for programming robot behaviors using automata. It combines a graphical language to specify the states and the transitions with a text language (Python or C++). It generates a ROS node which implements the automaton and shows a GUI at runtime with the currently active state, for debugging. Take a look at some example videos.

Every automaton created using VisualStates can be seen as a state itself and can then be integrated into a larger automaton. Therefore, the user would be able to add previously created behaviors as states. When importing those behaviors, the user would have two options: copying the behavior into the new behavior, or keeping a reference to the imported automaton so that if it is changed, those changes are reflected in the new behavior too. The idea of this project is to build and support an automata library. There will be a library of predefined behaviors (automata) for coping with usual tasks, so the user can just integrate them as new states in a new automaton, without writing any code. In addition, each automaton may accept parameters to fine-tune its behavior. For example, for moving a drone forward there will be a 'moveForward' state, so the user only has to import that state and indicate the desired speed as a parameter.
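
As an illustration of the parameterization idea, a library state could be a small class taking its tuning values in the constructor; this sketch is hypothetical and does not reflect the current VisualStates code base:

    class State:
        """Minimal stand-in for a VisualStates-generated state."""
        def run(self):
            raise NotImplementedError

    class MoveForward(State):
        """Predefined library behavior: move a drone forward."""
        def __init__(self, drone, speed):
            self.drone = drone
            self.speed = speed  # parameter that fine-tunes the behavior

        def run(self):
            # Called on every automaton iteration while this state is
            # active; send_velocity is a hypothetical drone interface.
            self.drone.send_velocity(vx=self.speed, vy=0.0, vz=0.0)

    # The user imports the state and only sets the parameter:
    # move = MoveForward(drone, speed=0.5)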

Expected results: a new release of VisualStates tool.

Knowledge Prerequisite: Python and C++ programming skills.

Mentors: Okan Asik (Okan Aşık <asik.okan AT gmail.com>).

Improving SLAM-testbed tool

Brief Explanation: Simultaneous Localization and Mapping (SLAM) algorithms play a fundamental role for emerging technologies, such as autonomous cars or augmented reality, providing an accurate localization inside unknown environments. There are many approaches available with different characteristics in terms of accuracy, efficiency and robustness (ORB-SLAM, DSO, SVO, etc), but their results depend on the environment and resources available.

SLAM-testbed is a graphic tool to compare different Visual SLAM approaches objectively, evaluating them on several public benchmarks with statistical treatment in order to compare them in terms of accuracy and efficiency. The main goal of this project is to increase the compatibility of this tool with new benchmarks and SLAM algorithms, so that it becomes a standard tool to evaluate future approaches.
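
As an example of the kind of accuracy statistic such benchmarks use, the absolute trajectory error (ATE) compares time-associated ground-truth and estimated positions. A minimal sketch, assuming the trajectories are already associated and aligned (a real benchmark would first align them, e.g. with Umeyama's method):

    import numpy as np

    def ate_rmse(gt_positions, est_positions):
        """RMSE of the absolute trajectory error.

        Both arguments are (N, 3) arrays of 3D positions, already
        time-associated and aligned in a common reference frame.
        """
        errors = np.linalg.norm(gt_positions - est_positions, axis=1)
        return float(np.sqrt(np.mean(errors ** 2)))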

The next video shows one of the SLAM algorithms (called ORB-SLAM) that will be evaluated with this tool:

Expected results: Add new benchmarks and SLAM algorithms to SLAM-testbed tool.

Knowledge Prerequisite: C++ programming skills.

Mentors: Eduardo Perdices (eperdices AT gsyc.es).

Create realistic 3D maps from SLAM algorithms

Brief Explanation: Simultaneous Localization and Mapping (SLAM) algorithms play a fundamental role for emerging technologies, such as autonomous cars or augmented reality. These algorithms provide accurate localization inside unknown environments; however, the maps obtained with these techniques are often sparse and meaningless, composed of thousands of 3D points without any relation between them.

The goal of this project is to process the data obtained from SLAM approaches and create a realistic 3D map. The input data will consist of a dense 3D point cloud and a set of frames located in the map.
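
One possible starting point (an assumption, not a project requirement) is surface reconstruction over the point cloud, for example Poisson reconstruction as offered by the Open3D library; the file names here are placeholders:

    import open3d as o3d

    # Load the dense point cloud produced by the SLAM algorithm.
    pcd = o3d.io.read_point_cloud("slam_points.ply")
    pcd.estimate_normals()  # Poisson reconstruction needs normals

    # Build a triangle mesh from the points and save it.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)
    o3d.io.write_triangle_mesh("slam_mesh.ply", mesh)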

The next video shows one of the SLAM algorithms (called DSO) whose output data will be used to create the 3D map:

Expected results: A realistic 3D map from a 3D point cloud and frames.

Knowledge Prerequisite: Python or C++ programming skills.

Mentors: Eduardo Perdices (eperdices AT gsyc.es).

JdeRobot-Kids: compiling Python to Arduino

Brief Explanation: JdeRobot-Kids is an academic framework for teaching robotics to children in a practical way. It is based on Python: the kids have to program typical robot behaviors, like follow-line, using Python. JdeRobot-Kids is now mostly centered on the mBot robot from Makeblock, both the real robot and the simulated one in Gazebo. The mBot is an Arduino-based robot. Currently the student application runs on a regular computer connected to the onboard Arduino, and the Arduino and the PC interact using the Firmata protocol. This approach requires a continuous connection between the robot and the off-board computer, because the Arduino's computing power is too limited to run a Python interpreter. The goal of this project is to "compile" the Python application for the Arduino microprocessor, so that the kid's program can be fully downloaded onto the mBot robot and run completely autonomously. Another possibility is to translate the Python application to C/C++, which gcc/g++ already compile for the Arduino microprocessor. Some ideas to explore are the LLVM compiler infrastructure, Cython...
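
For context, the current tethered approach talks to the board over a serial link using the Firmata protocol; a minimal sketch with the pyFirmata library (the serial port name is an example) blinks an LED from the PC:

    import time
    from pyfirmata import Arduino

    board = Arduino('/dev/ttyUSB0')   # serial link that must stay connected
    led = board.get_pin('d:13:o')     # digital pin 13 as output

    # The logic runs on the PC, not on the Arduino: this is the
    # limitation the project aims to remove.
    for _ in range(10):
        led.write(1)
        time.sleep(0.5)
        led.write(0)
        time.sleep(0.5)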

Expected results: Python application running on an Arduino microprocessor.

Knowledge Prerequisite: Python, compilers.

Mentors: JoséMaría Cañas (josemaria.plaza AT urjc.es) and Luis Roberto Morales (lr.morales.iglesias AT gmail.com).

VisualCircuit tool, digital electronics language for robot behaviors

Brief Explanation: In reconfigurable circuits (FPGAs) a hardware description language is used to specify the circuit configuration and its behavior. For instance, the open source IceStudio tool uses such a visual language to configure FPGAs. The idea of this project is to explore the use of the same visual language to program reactive robot behaviors. There are blocks (existing circuits) and wires to connect their inputs and outputs. Instead of synthesizing the visual program into an FPGA implementation, the goal is to synthesize it into a Python program. Each block is translated into a thread that runs a transforming function at fast iterations. Each iteration reads the block inputs, does some specific processing to compute the right values and writes them on its outputs. Each wire is translated into a shared variable which the blocks can write to or read from.
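
A minimal sketch, assuming nothing beyond the Python standard library, of how that synthesis model could look: wires as lock-protected shared variables and blocks as threads iterating a transform function.

    import threading
    import time

    class Wire:
        """Shared variable connecting block outputs to block inputs."""
        def __init__(self, value=0.0):
            self._value = value
            self._lock = threading.Lock()

        def read(self):
            with self._lock:
                return self._value

        def write(self, value):
            with self._lock:
                self._value = value

    def block(transform, inputs, outputs, period=0.02):
        """Run `transform` at fast iterations in its own thread."""
        def loop():
            while True:
                values = transform(*(w.read() for w in inputs))
                for wire, value in zip(outputs, values):
                    wire.write(value)
                time.sleep(period)
        thread = threading.Thread(target=loop, daemon=True)
        thread.start()
        return thread

    # Example: a gain block wired between two shared variables.
    a, b = Wire(1.0), Wire()
    block(lambda x: (2.0 * x,), inputs=[a], outputs=[b])
    time.sleep(0.1)
    print(b.read())  # -> 2.0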

Expected results: a new tool to program reactive robot behaviors using a visual language based on blocks and wires.

Knowledge Prerequisite: Python programming skills.

Mentors: JoséMaría Cañas (josemaria.plaza AT urjc.es), Samuel Rey (samuel.rey.escudero AT gmail.com).


Previous projects