Tools


Viewers and teleoperators

CameraView

CameraView is a JdeRobot tool made to show images from real and simulated cameras through an ICE interface or ROS messages.


This tool has a single thread in charge of getting images and showing them through the GUI. CameraView connects through the interface to a camera server (real or simulated), takes images from it and shows each one through a GUI class (called viewer). Its graphical interface is built with the GTK library. This node is essentially a class that takes an image and shows it in a window. This tool uses the following interfaces:

Execution

In order to run the component, here you have a complete guide on how to run CameraView.
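
A minimal session might look like this (assuming a camera server is already providing images and that cameraview.cfg points its Camera proxy at it):

cameraserver cameraserver.cfg
cameraview cameraview.cfg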

KobukiViewer

The KobukiViewer tool allows the control and teleoperation of JdeRobot wheeled robots such as the Kobuki and the Pioneer. This component allows connection on demand, so you don't need to have all the interfaces operative to launch the tool; for example, you can work with the motors interface only.


This tool is intended to teleoperate wheeled robots both in real and simulated scenarios. For that purpose, the kobukiViewer tool connects to different ICE interfaces in order to get data from sensors and send data to the actuators (motors). This tool is programmed with the Qt libraries and has a very simple and intuitive GUI.

The GUI of this tool offers the possibility to teleoperate the robots through a 2D canvas that controls the linear (v) and angular (w) velocity. The canvas shows the commanded v and w in real time, so you can see which velocity you are setting while moving the ball. Besides teleoperation, kobukiViewer also allows you to get data from sensors such as the laser, cameras and Pose3D through several checkboxes, and it displays the current v and w in real time.
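
As a rough illustration of the canvas behaviour (a hypothetical mapping, not the tool's actual code), the ball offset can be scaled by the configured Vmax and Wmax to obtain the commanded speeds:

# Hypothetical mapping from the canvas ball position to velocity commands (illustrative only).
VMAX = 3.0    # kobukiViewer.Vmax
WMAX = 0.7    # kobukiViewer.Wmax

def canvas_to_speed(x, y):
    # x, y in [-1, 1]: ball offset from the canvas centre.
    v = -y * VMAX    # push the ball up -> drive forward
    w = -x * WMAX    # push the ball left -> turn left
    return v, w

print(canvas_to_speed(0.0, -0.5))   # half of Vmax forward, no turn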

This tool uses the following ICE interfaces:

Execution

In order to run the component, here you have a complete guide on how to run kobukiViewer with a simulated Kobuki robot.
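
For instance (the world and configuration file names below are the usual examples; adjust them to your installation):

gazebo kobuki-simple.world
kobukiViewer kobukiViewer.cfg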

Configuration file

You can also check the configuration file here

A configuration file example may be like this:

kobukiViewer.Motors.Proxy=Motors:tcp -h 0.0.0.0 -p 8999
kobukiViewer.Camera1.Proxy=cam_turtlebot_left:tcp -h 0.0.0.0 -p 8995
kobukiViewer.Camera2.Proxy=cam_turtlebot_right:tcp -h 0.0.0.0 -p 8994
kobukiViewer.Pose3D.Proxy=Pose3D:tcp -h 0.0.0.0 -p 8998
kobukiViewer.Laser.Proxy=Laser:tcp -h 0.0.0.0 -p 8996
kobukiViewer.Laser=0
kobukiViewer.Pose3Dencoders1.Proxy=Pose3DEncoders1:tcp -h 0.0.0.0 -p 8993
kobukiViewer.Pose3Dencoders2.Proxy=Pose3DEncoders2:tcp -h 0.0.0.0 -p 8992
kobukiViewer.Pose3Dmotors1.Proxy=Pose3DMotors1:tcp -h 0.0.0.0 -p 8991
kobukiViewer.Pose3Dmotors2.Proxy=Pose3DMotors2:tcp -h 0.0.0.0 -p 8990
kobukiViewer.Vmax=3
kobukiViewer.Wmax=0.7

Here you can specify the maximum v and w for the robot, as well as the endpoints used to connect to all the interfaces.

RGBDViewer

This tool shows information from RGBD sensors such as the Microsoft Kinect (1 or 2) and the Asus Xtion.

This tool is intended to show different types of images from RGBD sensors. It manages RGB images, DEPTH images and point clouds. RGBDViewer makes use of two interfaces: camera and pointcloud.

The GUI of this tool is a little bit tricky. At launch it has 3 buttons: one to activate the RGB image display, one to activate the DEPTH image display and another one to show the 3D representation of the robot and the point cloud. When the third button is activated, some additional buttons appear:

  * Show room on RGB
  * Show room on DEPTH
  * Clear projection lines
  * Reconstruct
      * DEPTH: shows pointcloud
      * RGB on DEPTH
  * Camera position: shows the relative position of the camera in the world.

For basic use, you only have to know the functionality of the 3 main buttons: Camera RGB, cameraDEPTH and Reconstruct DEPTH, in order to show the RGB and DEPTH images, and the pointcloud respectively.

This tool uses only three ICE interfaces:

Execution

In order to run the component, here you have a complete guide on how to run the rgbdViewer tool with a simulated Kinect sensor.
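
For instance, once the simulated Kinect is up and the endpoints in the configuration file match it, the viewer is launched with its .cfg file (binary and file names below are the usual ones; adjust them to your installation):

rgbdViewer rgbdViewer.cfg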


Configuration file

You can also check the configuration file from here

A configuration file example may be like this:

rgbdViewer.CameraRGBActive=1
rgbdViewer.CameraRGB.Fps=10
rgbdViewer.CameraRGB.Proxy=cameraA:tcp -h localhost -p 9999
rgbdViewer.CameraDEPTHActive=1
rgbdViewer.CameraDEPTH.Fps=10
rgbdViewer.CameraDEPTH.Proxy=cameraB:tcp -h localhost -p 9999
rgbdViewer.pointCloudActive=0
rgbdViewer.pointCloud.Fps=10
rgbdViewer.pointCloud.Proxy=pointcloud1:tcp -h localhost -p 9999
rgbdViewer.Pose3DMotorsActive=0
rgbdViewer.Pose3DMotors.Proxy=Pose3DMotors1:tcp -h 193.147.14.20 -p 9999
rgbdViewer.KinectLedsActive=0
rgbdViewer.KinectLeds.Proxy=kinectleds1:tcp -h 193.147.14.20 -p 9999
rgbdViewer.WorldFile=./config/fempsa/fempsa.cfg
rgbdViewer.Width=320
rgbdViewer.Height=240
rgbdViewer.Fps=15
rgbdViewer.Debug=1

UAV Viewer

UAV_Viewer is a component for the teleoperation of UAVs in JdeRobot. It works with both simulated environments and real scenarios (such as the Ar.Drone 1 and 2).


This tool uses the following ICE interfaces:

Execution

Execution for simulated environments:


uav_viewer uav_viewer_simulated.cfg

Execution to teleoperate the real Ar.Drone:


uav_viewer uav_viewer.cfg

Configuration file

The typical configuration file for simulated ArDrone:

UAVViewer.Camera.Proxy=Camera:default -h 0.0.0.0 -p 9000
UAVViewer.Pose3D.Proxy=Pose3D:default -h 0.0.0.0 -p 9000
UAVViewer.CMDVel.Proxy=CMDVel:default -h 0.0.0.0 -p 9000
UAVViewer.Navdata.Proxy=Navdata:default -h 0.0.0.0 -p 9000
UAVViewer.Extra.Proxy=Extra:default -h 0.0.0.0 -p 9000

For real ArDrone

UAVViewer.Camera.Proxy=Camera:default -h 0.0.0.0 -p 9994
UAVViewer.Pose3D.Proxy=Pose3D:default -h 0.0.0.0 -p 9000
UAVViewer.CMDVel.Proxy=CMDVel:default -h 0.0.0.0 -p 9850
UAVViewer.Navdata.Proxy=Navdata:default -h 0.0.0.0 -p 9700
UAVViewer.Extra.Proxy=Extra:default -h 0.0.0.0 -p 9701


From keyboard

The following commands are also accepted from the keyboard:

T - takeoff
L - land
C - change camera
W - move up
S - move down
A - rotate drone anticlockwise
D - rotate drone clockwise
8 - move forward
2 - move backward
6 - move right
4 - move left

NavigatorCamera

The NavigatorCamera tool allows you to teleoperate flyingkinects in a simulated Gazebo world, both in position and in velocity. To do so, the flyingkinect offers two different interfaces: one for the camera, and a Pose3D interface that enables control of the camera position in the world. This tool has a very intuitive user interface, so controlling the camera is easy for the user.


This tool is programmed with three different threads, one each for the GUI, the images from the camera and the speed control. All of them depend on the main loop of the tool, which also creates the ICE connections with all the interfaces that the flyingkinect offers.

The GUI of this tool has several buttons for the position control of the camera. It also contains two canvases to teleoperate it easily. The GUI is divided into three sections: one for the translation movement (left side), another for the rotation movement (center) and a third to control the step of the movement and reset the position of the sensor, as you can see in the image below:



This tool uses only two ICE interfaces:

Execution

Since this tool is made to teleoperate cameras (especially flyingkinects), it is necessary to launch a world with such a sensor in it in order to make the tool work. So, for this example, we will run a world with a flyingkinect in it:

gazebo flyingkinect.world

and then run the navigatorCamera tool:

navigatorCamera navigatorCamera.cfg


Flyingkinects are programmed to allow more than one camera in a single scene, so you can launch multiple instances of navigatorCamera (configuring the endpoints properly), one for each flyingkinect in the scene.

Configuration file

You can also check the configuration file here

A configuration file example may be like this:

# Camera RGB interface
navigatorCamera.CameraRGB.Proxy=cameraRGB:tcp -h localhost -p 9997
navigatorCamera.CameraRGB.Format=RGB8
navigatorCamera.CameraRGB.Fps=10

# Pose3D interface
navigatorCamera.Pose3D.Proxy=Pose3d:default -h localhost -p 9987
navigatorCamera.Pose3D.Fps=10

# GUI activation flag
navigatorCamera.guiActivated=1
# Control activation flag
navigatorCamera.controlActivated=0
# speed activation flag
navigatorCamera.speedActivated=1

# Glade File
navigatorCamera.gladeFile=navigatorCamera.glade

navigatorCamera.TranslationStep=0.1
navigatorCamera.RotationStep=0.07

So you have different flags to activate/deactivate the GUI, the speed control or the full control of the camera. You can also specify the .glade file location and the step of the camera movement.
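
As a rough illustration of what the steps mean (a hypothetical helper, not navigatorCamera's actual code; the real tool sends the updated pose through the Pose3D interface):

import math

TRANSLATION_STEP = 0.1   # navigatorCamera.TranslationStep
ROTATION_STEP = 0.07     # navigatorCamera.RotationStep

def step_forward(x, y, z, yaw):
    # Advance one translation step along the current heading.
    return x + TRANSLATION_STEP * math.cos(yaw), y + TRANSLATION_STEP * math.sin(yaw), z, yaw

def step_rotate_left(x, y, z, yaw):
    # Turn one rotation step to the left.
    return x, y, z, yaw + ROTATION_STEP

pose = step_forward(0.0, 0.0, 1.0, 0.0)
print(step_rotate_left(*pose))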

PanTilt Teleop

Pantilt Teleop is a JdeRobot tool made to handle a real Sony Evi d100p camera through an ICE interface and a ROS driver.

This tool involves two drivers. The first one is the ROS "usb_cam" driver, which is responsible for extracting the image from the sensor. The other one employs an ICE interface to teleoperate the camera. The GUI of this tool includes both a class that takes an image and shows it in a window, and a window that offers the possibility to teleoperate the camera.

Execution

Note that to make this component run, you need an image provider (a camera) connected to a USB port and listening at the ports you want to bind, that is, properly configured (through .cfg and .yml files). So first you have to run both the usb_cam and evicam drivers and then run pantilt_teleop.

The way to make it run is the following:

1. Run "usb_cam" in the first terminal(ensure you have installed ros-kinetic-usb-cam packet):

$ roslaunch usb_cam_test.launch

2. Run the evicam_driver in the second:

$ evicam_driver evicam_driver.cfg

3. Finally, run the pantilt_teleop tool:

$ pantilt_teleop pantilt_teleop.yml

Configuration file

You can also check the configuration file here

A configuration file example may be like this:

pantilt_teleop:
  PTMotors:
    Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS (Not supported)
    Proxy: "PanTilt:default -h localhost -p 9977"
    Name: basic_component_pyPTMotors
  
  Camera:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "cameraA:default -h localhost -p 9999"
    Format: RGB8
    Topic: "/usb_cam/image_raw"
    Name: pantilt_teleop_pyCamera

  NodeName: pantilt_teleop_py

Where you can specify the endpoints to connect all the interfaces.

WebTools

CameraViewJS

CameraViewJS is a JdeRobot tool made to show images from real and simulated cameras through an ICE interface using a Web browser.


This tool shows images from real and simulated cameras. For that purpose, the CameraViewJS tool connects to different ICE interfaces using WebSockets in order to get data from cameras. This tool is programmed with HTML, JavaScript and CSS.

The GUI of this tool is made with HTML to be used in a web browser. It is composed of three buttons (Start, Stop and open the tool configuration), one canvas to show the robot camera, and a box showing the FPS (frames per second) and image size.


This tool uses the following ICE interfaces:

Execution

To run this component, first you have to run cameraserver:

cameraserver cameraserver.cfg

Then run CameraViewJS:

cd /usr/local/share/jderobot/webtools/cameraviewjs
node run.js

and finally, open the following URL in a web browser:

http://localhost:7777


To configure the tool, press the config button and enter the configuration.


KobukiViewerJS

The KobukiViewerJS tool allows the control and teleoperation of JdeRobot wheeled robots such as the Kobuki and the Pioneer from a web browser. This component allows connection on demand, so you don't need to have all the interfaces operative to launch the tool; for example, you can work with the motors interface only.


This tool is intended to teleoperate wheeled robots both in real and simulated scenarios. For that purpose, the kobukiViewerJS tool connects to different ICE interfaces using WebSockets in order to get data from sensors and send data to the actuators (motors). This tool is programmed with HTML, JavaScript and CSS.

The GUI of this tool is made with HTML to be used in a web browser. It is composed of three buttons (Start, Stop and open the tool configuration), two canvases to show the robot cameras, another canvas for the 2D laser, and two more canvases: one for the 3D model (which you can enable or disable) and the last one for control.


This tool uses the following ICE interfaces:

Execution

To run this component, first you have to run gazebo with a model of the kobuki:

gazebo kobuki-simple.world

Then run KobukiViewerJS:

cd /usr/local/share/jderobot/webtools/kobukiviewerjs
node run.js

and finally, open the following URL in a web browser:

http://localhost:7777

To configure the tool, press the config button and enter the configuration.


UavViewerJS

UavViewerJS is a component for the teleoperation of UAVs in JdeRobot from a web browser. It works with simulated environments and real scenarios (such as the Ar.Drone 1 and 2). This component allows connection on demand, so you don't need to have all the interfaces operative to launch the tool; for example, you can work with the cmdvel interface only.

Structure

This tool is intended to teleoperate UAV robots both in real and simulated scenarios. For that purpose, the UavViewerJS tool connects to different ICE interfaces using WebSockets in order to get data from sensors and send data to the actuators (cmdvel). This tool is programmed with HTML, JavaScript and CSS.

GUI

The GUI of this tool is made with HTML to be used in a web browser. It is composed of three buttons (Start, Stop and open the tool configuration), one canvas to show the UAV camera, another for the 3D model (which you can enable or disable) and the last one for control.

The first canvas also has three buttons to take off, land and toggle the camera, two sticks to teleoperate, and flight indicators. If you double-tap the camera image, this canvas goes fullscreen.

ICE Interfaces

This tool uses the following ICE interfaces:

Execution

To run this component, first you have to run Gazebo with a model of the UAV:

gazebo road_drone_textures.world

Then run UavViewerJS:

cd /usr/local/share/jderobot/webtools/uavviewerjs
node run.js

and finally, open the following URL in a web browser:

http://localhost:7777

Configuration

To configure the tool, press the config button and enter the configuration.

CameraCalibrator

This tool gives the possibility to efficiently calibrate cameras.

If not properly calibrated, cameras won't be able, for example, to project 3D points correctly. Calibrating a camera means making world lines fit the image objects. In order to calibrate the camera, you have to use a chessboard pattern and show it to the camera, moving it as much as possible. This tool calibrates the camera frame by frame, so the more frames you choose for the calibration, the better the calibration will be. Each time the camera finds a match with the pattern, that frame counts as calibrated; this process repeats until all the frames are calibrated. The camera calibration parameters are saved in a text file.

This tool consists of a single thread that takes images from the camera to be calibrated and shows them in the GUI. It is a main loop where the image is periodically updated and shown while the intrinsic and extrinsic parameters of the camera are computed.

The GUI is a single window showing the images captured from the camera. It has no GUI elements such as buttons or sliders; instead it works with hotkeys. The hotkeys associated with this tool are:

  • g to start the calibration.
  • u to finish the process.

So once you start the tool, show the chessboard pattern to the camera and press g, then move the pattern within the camera boundaries to properly calibrate it. Once the last frame is calibrated, just press u on the keyboard to save the camera parameters to a file.
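
As an illustration of what happens under the hood, this is roughly how a chessboard calibration works with OpenCV (a sketch only, not the tool's actual code; 'frames' stands for images already grabbed from the camera server):

import cv2
import numpy as np

pattern = (9, 6)            # inner corners per row/column (pattern.width, pattern.height)
square = 26.0               # square size (pattern.size)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for frame in frames:        # 'frames': hypothetical list of captured images
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:               # this frame counts as calibrated
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)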

This tool uses only one ICE interface:

Execution

Note that to make this component run, you need an image provider (a camera) running, listening at the ports you want to bind, and properly configured (through .cfg files) as explained [here]. So first you have to run a camera server and then run the cameraCalibrator.

The way to make it run is the following:

1. Run a camera server (we use CameraServer tool for this example):

$ cameraserver cameraserver.cfg

2. Run the CameraCalibrator tool:

$ cameraCalibrator cameraCalibrator.cfg

You can check the configuration file here

An example of this file:

cameraCalibrator.pattern.width=9
cameraCalibrator.pattern.height=6
cameraCalibrator.pattern.type=chessboard
cameraCalibrator.pattern.size=26.0f
cameraCalibrator.pattern.ratio=1.0f
cameraCalibrator.frames=20
cameraCalibrator.writePoints=1
cameraCalibrator.writeEstrinsics=1
cameraCalibrator.delay=2000

cameraCalibrator.nCameras=2
cameraCalibrator.camera.0.Proxy=cameraA:tcp -h localhost -p 9998
cameraCalibrator.camera.0.outFile=cameraA.yml
cameraCalibrator.camera.1.Proxy=cameraB:tcp -h localhost -p 9999
cameraCalibrator.camera.1.outFile=cameraB.yml

Where:

  • pattern.{width,height} indicates the number of cells (rows and columns) of the pattern used to calibrate the camera.
  • pattern.type indicates the type of the pattern. It can take these values: CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID. Note that you must have a proper pattern.
  • pattern.size indicates the total size of the pattern. Used to locate corners.
  • pattern.ratio aspect ratio of the pattern.
  • frames number of frames used to calibrate the camera. 20~30 is the optimal setting. If you have a powerful machine, you can set this number higher.
  • writePoints
  • writeEstrinsics flag to choose if the extrinsic parameters must be saved. Intrinsic parameters are saved by default.
  • nCameras number of cameras to be calibrated. You have to set the following two lines for each camera you want to add to the calibration process, changing the proper names and ports, of course:
cameraCalibrator.camera.0.Proxy=cameraA:tcp -h localhost -p 9999
cameraCalibrator.camera.0.outFile=cameraA.yml

ColorTuner

The ColorTuner component implements image color filters in three color spaces: RGB, HSV and YUV. It is an application to configure tailored color filters in any of those spaces, and it is used to obtain optimal values of hue and saturation, as well as lighting, for that kind of filter. To perform the color conversions between spaces we used the conventions that appear in Wikipedia (HSV color conversions), and for YUV (YUV color conversions).

This tool has a single thread in charge of getting images and showing them through the GUI. ColorTuner connects through the ICE interface to a camera server (real or simulated), takes images from it and shows each one through a GUI class (called viewer).

This tool has an advanced user interface made with the Qt libraries. The GUI consists of two image frames where the images taken from the camera are shown (the input image and the output image with the filters applied).

- Moving the sliders up and down sets the maximum and minimum values (for example of RED, GREEN and BLUE) used to apply the filter, as in the sketch below.
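
For example, the kind of filter the sliders configure can be reproduced with a few lines of OpenCV (an illustrative sketch, not the tool's code; the input file name is hypothetical):

import cv2
import numpy as np

image = cv2.imread("input.jpg")                      # input image
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([35, 80, 80])                       # minimum H, S, V from the sliders
upper = np.array([85, 255, 255])                     # maximum H, S, V from the sliders
mask = cv2.inRange(hsv, lower, upper)                # pixels inside the tuned range
filtered = cv2.bitwise_and(image, image, mask=mask)  # output image with the filter applied
cv2.imshow("filtered", filtered)
cv2.waitKey(0)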

This tool uses one ICE Interface:

Camera

Execution

Note that to make this component run, you need an image provider (a camera) running, listening at the ports you want to bind, and properly configured (through .cfg files) as explained [here]. So first you have to run a camera server and then run the colorTuner.

The way to make it run is the following:

1. Run a camera server (we use CameraServer tool for this example):

$ cameraserver cameraserver.cfg

2. Run the ColorTuner tool:

$ colorTuner colorTuner.cfg

Configuration file

You can also check the configuration file here

To configure this component, you just have to specify where the images come from in the "colorTuner.cfg" file. Here is a configuration example:

CameraView.Camera.Proxy=cameraA:tcp -h 127.0.0.1 -p 9999

Where "127.0.0.1" is the IP address of the device (local IP in the example) and "9999" is the port number that must coincide with the port and CameraA is the name of the ICE interface we are connecting to. of the service provider(cameraserver, gazebosever, etc).



Recording logs and replaying from them

(Deprecated, but in operation while we adopt the ROS bags tool)

Recorder

Recorder is a tool intended to record information from JdeRobot interfaces such as: cameras, encoders, motors, lasers, etc.

This tool requires a JdeRobot component connected and transmitting information through an ICE interface. This information is stored in a file with the .jde extension (like ROS bag files), which can be read by another JdeRobot tool called Replayer. All the recording configuration is done through configuration files, so it is essential to understand the .cfg file of this tool in order to get a good recording. This tool lets you record a run and then execute different algorithms and check or compare them with the same data.

This component has a very simple GUI, only to start/stop the recording and show the frames per second and the iteration as additional information for the user. So the configuration of this tool is made entirely through the configuration file.

Recorder works with almost all the provided JdeRobot interfaces, more exactly these ones:

Execution

In order to run Recorder, you need a robot or sensor running and connected (at least one of its interfaces). For this example, we will record the cameras of a simulated Pioneer robot. Run Gazebo with the simulated Pioneer:

gazebo pioneer2dxJde.world

and then run Recorder. NOTE: you have to configure the tool properly to record the correct interfaces; see the Configuration file section to learn how.

recorder recorder.cfg

Configuration file

You can also check the configuration file here

A configuration file example may be like this (example for the Pioneer cameras record):

Recorder.FileName=datos.jde
Recorder.nCameras=2
Recorder.nDethSensors=0
Recorder.nLasers=0
Recorder.DepthSensor1.Proxy=pointcloud1:tcp -h 127.0.0.1 -p 9998
Recorder.DepthSensor2.Proxy=pointcloud1:tcp -h 127.0.0.1 -p 9998
Recorder.Laser1.Proxy=Laser:tcp -h localhost -p 9996
Recorder.Camera1.Proxy=cam_pioneer_left:tcp -h localhost -p 9995
Recorder.Camera2.Proxy=cam_pioneer_right:tcp -h localhost -p 9994
Recorder.Camera3.Proxy=cameraA:tcp -h localhost -p 9998
Recorder.Camera4.Proxy=cameraB:tcp -h localhost -p 9998
Recorder.nPose3dEncoders=0
Recorder.Pose3DEncoders1.Proxy=Pose3DEncoders1:tcp -h localhost -p 9993
Recorder.Pose3DEncoders2.Proxy=pose3dencoders2:tcp -h localhost -p 9999
Recorder.nEncoders=0
Recorder.Encoders1.Proxy=Encoders:tcp -h localhost -p 9997
Recorder.GUI=1
Recorder.nConsumers=1
Recorder.poolSize=10
Recorder.Hostname=localhost
Recorder.Port=9990

  • Recorder.n{Interface} indicates the number of sensors to record. In this case we want to record the images taken from the left and right cameras of the Pioneer robot, so nCameras=2.
  • Recorder.{Interface}.Proxy usual ICE binding configuration. Indicates where to find the desired interface.

NOTE: if you close this program with Ctrl+C, edit the resulting file and remove the last records, because the recording could be incomplete.

Replayer

This tool replays the information saved by the Recorder tool.

Replayer is able to read the information stored by Recorder (in .jde format) and reproduce it. Replayer takes the information recorded from the different interfaces and recreates the same interfaces, providing the information they gave at the moment of the recording. So you get a loop of each interface returning exactly the same values as were recorded.

This component has a GUI, but it is not active for the moment, pending a revision of the tool.

The ICE interfaces that Replayer allows are exactly the same as Recorder:


Execution

This component has no active GUI for controlling the replay until the tool is revised, but there is a workaround using a tool specific to this component named replayController. In order to replay an interface there is no need to run replayController if you only want to see a loop of the recording, but running both tools is recommended for a better experience. Also, each replayed interface needs to be viewed with its own tool, so following the example started in Recorder, we will explain how to replay the recordings made with the simulated Pioneer.

  • Run the replayer tool
replayer replayer.cfg
  • Run the replayController tool
replayController replayController.cfg
  • Run cameraview to see the recorded images from the simulated Pioneer
cameraview cameraview.cfg

NOTE: you have to configure each tool properly in order to make them work together

Configuration file

You can also check the configuration file from here

A configuration file example may be like this (example for the Pioneer cameras record):


# Log
# Levels: 0(DEBUG), 1(INFO), 2(WARNING), 3(ERROR)

Replayer.Log.File.Name=./log/Replayer.txt
Replayer.Log.File.Level=0
Replayer.Log.Screen.Level=0

NamingService.Enabled=0
NamingService.Proxy=NamingServiceJdeRobot:default -h 0.0.0.0 -p 10000

#without registry
Replayer.Endpoints=default -h localhost -p 9999

Replayer.nCameras=1
Replayer.nPointClouds=0
Replayer.nPose3dEncoders=0
Replayer.nEncoders=0
Replayer.nLasers=0
Replayer.replayControl.Active=1
Replayer.replayControl.Name=replayControllerA


#camera 1
Replayer.Camera.0.Name=cameraA
Replayer.Camera.0.ImageWidth=640
Replayer.Camera.0.ImageHeight=480
Replayer.Camera.0.Format=RGB8
Replayer.Camera.0.Dir=../recorder/data/images/camera1/
Replayer.Camera.0.FileFormat=png

#camera 2
Replayer.Camera.1.Name=cameraB
Replayer.Camera.1.ImageWidth=320
Replayer.Camera.1.ImageHeight=240
Replayer.Camera.1.Format=RGB8
Replayer.Camera.1.Dir=../recorder/data/images/camera2/
Replayer.Camera.1.FileFormat=png

Replayer.Hostname=localhost
Replayer.Port=9998

Replayer.Speed=1
Replayer.FileName=../recorder
Replayer.Sensors=../recorder/datos.jde

Pay attention to FileName and Sensors; they indicate where the recorded file is located.

  • Replayer.Endpoints port where the interfaces are going to be created
  • Replayer.replayControl.Active allows replayController to be bound
  • Replayer.Camera.X.Dir where the frames were stored by Recorder (see the Recorder configuration files to locate them)

VisualStates

VisualStates is a tool for programming robot behaviors using hierarchical finite state machines (HFSM). It represents the robot's behavior graphically on a canvas composed of states and transitions. When the automaton is in a certain state, it passes from one state to another depending on the conditions established in the transitions. This graphical representation allows a higher level of abstraction for the user, who only has to worry about programming the current robot actions and selecting which components of the robot's interface are needed.
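
The idea behind the generated code can be sketched in a few lines (hypothetical names and conditions, not the code VisualStates produces): each state runs its action on every iteration, and a transition fires when its condition holds.

import time

state = "search"                      # current state of the automaton

def ball_seen():
    return False                      # placeholder perception condition

while True:
    if state == "search":
        print("turning in place, looking for the ball")
        if ball_seen():               # transition: search -> approach
            state = "approach"
    elif state == "approach":
        print("driving towards the ball")
        if not ball_seen():           # transition: approach -> search
            state = "search"
    time.sleep(0.1)                   # automaton iteration period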

For more detailed information about using visualStates, click here.

There are several ready-to-use examples available here. In this video you can also see a simple example of a drone going through different behaviors.

VisualStates offers three different functionalities. First there is the tool itself, visualStates, which we have already explained. The tool can generate code in two languages, C++ and Python, and for each language it also offers a runtime GUI to help with the debugging process. You can also generate the code as a ROS node, in both C++ and Python.

Enabling runtime GUI

The runtime GUI is DISABLED by default, as it does not make sense to display it when executing the code on an actual robot.

When executing an automaton created with visualStates, if you want it to display the GUI you must ENABLE it by calling it with the argument --displaygui=true.

For example:
./automataName automataName.yml --displaygui=true 

This argument is the same in C++ and in python.

Execution

If the created behavior is an ICE C++ component, you have to create a build directory, configure with CMake and build the behavior before execution, by running the following commands from the behavior directory:

mkdir build
cd build
cmake ..
make
cd ..
./behaviorName behaviorName.yml --displaygui=true

If you have generated a Python behavior, you can directly run it from the behavior directory:

./behaviorName.py behaviorName.yml --displaygui=true

If you generate a ROS node, you have to make sure that your behavior directory is inside your catkin workspace. The generated behavior will be a new ROS package. You have to compile your workspace and run it as a ROS node:

cd catkin_ws
catkin_make
rosrun behaviorName behaviorName --displaygui=true


Scratch4Robots

This tool allows you to program complex robots such as the TurtleBot or drones with the Scratch visual language.

3DViewer

3DViewer is a generic 3D viewer tool that lets you see a reconstruction of the 3D world in point cloud format.

This tool offers a viewer that shows a reconstruction of a 3D world captured by vision sensors such as cameras or RGBD sensors. It is useful with 3D world mapping algorithms, showing the reconstruction they produce.

DetectionSuite

(Under construction)

This is a tool for DeepLearning applications. Its development has its own public repository.

  • It accepts several image databases
  • It allows fast annotation of object detection databases
  • It shows statistics of neural network performance over images from several detection databases
  • It may run networks from several DeepLearning middleware (YOLO...)

Academy

This tool is a whole teaching framework using ROS and JdeRobot to teach robotics and computer vision. It includes a collection of around twenty exercises on robot programming and computer vision programming. It has its own web page and repository.