Handbook


Introduction[edit]

JdeRobot is a software suite for developing robotics and computer vision applications. These domains include sensors (for instance, cameras), actuators, and the intelligent software in between. It is mostly written in C++ and Python and provides a distributed component-based programming environment where the application is made up of a collection of concurrent asynchronous components. They may run on different computers and are connected using the ICE communication middleware or ROS messages. Components interoperate through explicit interfaces. Each one may have its own independent Graphical User Interface or none at all.

JdeRobot simplifies access to hardware devices from the application program. Getting sensor measurements or issuing motor commands is done by calling local functions. The platform attaches those calls to driver components which are connected to sensor or actuator devices (real or simulated, remote or local). Those functions form the API of the Hardware Abstraction Layer. Currently supported robots and devices:

  • RGBD sensors: Kinect and Kinect2 from Microsoft, Asus Xtion
  • Wheeled robots: TurtleBot from Yujin Robot and Pioneer from MobileRobots Inc.
  • ArDrone quadrotor from Parrot
  • Laser Scanners: LMS from SICK, URG from Hokuyo and RPLidar
  • Gazebo simulator
  • Firewire cameras, USB cameras, video files (mpeg, avi...), IP cameras (like Axis)


JdeRobot includes several robot programming tools and libraries:

  • teleoperators for several robots, viewing their sensors and commanding their motors. Some of them are web-based and run on smartphones.
  • VisualStates tool for programming robot behavior using hierarchical Finite State Machines
  • Scratch2JdeRobot tool for programming robots (including drones) with the standard graphical language
  • a camera calibrator
  • a tool for tuning color filters
  • a library to develop fuzzy controllers, and a library for projective geometry and computer vision processing.


JdeRobot is open-source software, licensed under the GPL and LGPL. It also uses third-party software like the Gazebo simulator, ROS, OpenGL, GTK, Qt, Player, Stage, GSL, OpenCV, PCL, Eigen and Ogre. It is ROS compatible.

Tools[edit]

Viewers and teleoperators[edit]

CameraView[edit]

CameraView is a JdeRobot tool made to show images from real and simulated cameras through an ICE interface or ROS messages.


This tool has a single thread that takes care of getting images and showing them through the GUI. CameraView connects through the interface to a camera server (real or simulated), takes images from it, and shows each image through a GUI class (called viewer). Its graphical interface is made with the GTK library. This node is simply a class that takes an image and shows it in a window. This tool uses the following interfaces:
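As a rough illustration of that single-threaded structure, the following Python sketch grabs frames over an ICE proxy and hands them to a display routine. The proxy string follows the configuration format used throughout this handbook; the jderobot module, the CameraPrx class and the getImageData() call are assumptions for illustration, not verified signatures, so they are left commented out:

import sys
import time
import Ice

ic = Ice.initialize(sys.argv)
try:
    # Proxy string taken from the tool's .cfg file (interface name, host, port)
    base = ic.stringToProxy("cameraA:default -h localhost -p 9999")
    # camera = jderobot.CameraPrx.checkedCast(base)  # Slice-generated proxy (assumed name)
    for _ in range(100):                             # main loop (bounded here for the sketch)
        # image = camera.getImageData()              # assumed call returning raw pixel data
        # viewer.show(image)                         # hand the frame to the GUI class
        time.sleep(1.0 / 15)                         # roughly the configured frame rate
finally:
    ic.destroy()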

Execution

In order to run the component, here you have a complete guide on how to run CameraView.

CameraView-web[edit]

CameraViewJS is a JdeRobot tool made to show images from real and simulated cameras through an ICE interface using a Web browser.


This tool shows images from real and simulated cameras. For that purpose, the CameraViewJS tool connects to different ICE interfaces using WebSockets in order to get data from cameras. This tool is programmed with HTML, JavaScript and CSS.

The GUI of this tool is made with HTML to be used in a web browser. It is composed of three buttons (Start, Stop and open the tool configuration), one canvas to show the robot camera, and a box that shows the FPS (frames per second) and the image size.


This tool uses the following ICE interfaces:

Execution[edit]

To run this component, first you have to run cameraserver:

cameraserver cameraserver.cfg

Then run CameraViewJS:

cd /usr/local/share/jderobot/webtools/cameraviewjs
node run.js

and finally, open the following address in a web browser:

http://localhost:7777


To configure the tool, press the config button and enter the configuration.


KobukiViewer[edit]

The KobukiViewer tool allows the control and teleoperation of JdeRobot wheeled robots such as the Kobuki and the Pioneer. This component allows connection on demand, so you don't need to have all the interfaces operative to launch it. For example, you can work only with the motors interface.


This tool is intended to teleoperate wheeled robots both in real and simulated scenarios. For that purpose, the kobukiViewer tool connects to different ICE interfaces in order to get data from sensors and send data to the actuators (motors). This tool is programmed with Qt libraries and it has a very simple and intuitive GUI.

The GUI of this tool offers the possibility to teleoperate the robots through a 2D canvas that controls the linear and angular velocity. This canvas shows the commanded v and w in real time, so you can see which velocity you are setting while moving the ball. Besides the teleoperation, kobukiViewer also allows getting data from sensors such as the laser, cameras and pose3D through several checkboxes. It also provides information about the current v and w in real time.
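As an illustration only (this is not the actual kobukiViewer code), the mapping from the 2D canvas to the velocity commands can be sketched in Python as follows, where v_max and w_max play the role of the kobukiViewer.Vmax and kobukiViewer.Wmax configuration entries:

def canvas_to_velocity(x, y, width, height, v_max=3.0, w_max=0.7):
    # Map a point on the teleoperation canvas to (v, w). Moving the ball
    # up/down sets the linear speed v, moving it left/right sets the angular
    # speed w; the canvas centre means 'stop'. This mapping is hypothetical
    # and only meant to illustrate the idea.
    v = (height / 2.0 - y) / (height / 2.0) * v_max   # up = forward
    w = (width / 2.0 - x) / (width / 2.0) * w_max     # left = turn left
    return v, w

# Example: a point slightly above and to the left of the centre of a 300x300 canvas
print(canvas_to_velocity(120, 100, 300, 300))          # -> (1.0, 0.14)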

This tool uses the following ICE interfaces:

Execution

In order to run the component, here you have a complete guide on how to run kobukiViewer with a simulated Kobuki robot.

Configuration file

You can also check the configuration file here

A configuration file example may be like this:

kobukiViewer.Motors.Proxy=Motors:tcp -h 0.0.0.0 -p 8999
kobukiViewer.Camera1.Proxy=cam_turtlebot_left:tcp -h 0.0.0.0 -p 8995
kobukiViewer.Camera2.Proxy=cam_turtlebot_right:tcp -h 0.0.0.0 -p 8994
kobukiViewer.Pose3D.Proxy=Pose3D:tcp -h 0.0.0.0 -p 8998
kobukiViewer.Laser.Proxy=Laser:tcp -h 0.0.0.0 -p 8996
kobukiViewer.Laser=0
kobukiViewer.Pose3Dencoders1.Proxy=Pose3DEncoders1:tcp -h 0.0.0.0 -p 8993
kobukiViewer.Pose3Dencoders2.Proxy=Pose3DEncoders2:tcp -h 0.0.0.0 -p 8992
kobukiViewer.Pose3Dmotors1.Proxy=Pose3DMotors1:tcp -h 0.0.0.0 -p 8991
kobukiViewer.Pose3Dmotors2.Proxy=Pose3DMotors2:tcp -h 0.0.0.0 -p 8990
kobukiViewer.Vmax=3
kobukiViewer.Wmax=0.7

Where you can specify the maximum v and w for the robot and specify the endpoints to connect all the interfaces.

KobukiViewer-web[edit]

The KobukiViewerJS tool allows the control and teleoperation of JdeRobot wheeled robots such as the Kobuki and the Pioneer from a web browser. This component allows connection on demand, so you don't need to have all the interfaces operative to launch it. For example, you can work only with the motors interface.


This tool is intended to teleoperate wheeled robots both in real and simulated scenarios. For that purpose, the kobukiViewerJS tool connects to different ICE interfaces using WebSockets in order to get data from sensors and send data to the actuators (motors). This tool is programmed with HTML, JavaScript and CSS.

The GUI of this tool is made with HTML to be used in a web browser. It is composed of three buttons (Start, Stop and open the tool configuration), two canvases to show the robot cameras, another for the laser in 2D, and two more canvases: one for the 3D model (which you can enable or disable) and the last one for control.


This tool uses the following ICE interfaces:

Execution[edit]

To run this component, first you have to run gazebo with a model of the kobuki:

gazebo kobuki-simple.world

Then run KobukiViewerJS:

cd /usr/local/share/jderobot/webtools/kobukiviewerjs
node run.js

and finally, open the following address in a web browser:

http://localhost:7777

To configure the tool, press the config button and enter the configuration.

RGBDViewer[edit]

This tool shows information from RGBD sensors such as the Kinect (1 or 2) from Microsoft and the Xtion from Asus.

This tool is intended to show different types of images from RGBD sensors. It manages RGB images, DEPTH images and point clouds. RGBDViewer makes use of two interfaces: camera and pointcloud.

The GUI of this tool is a little bit tricky. At launch it has 3 buttons: one to activate RGB image display, one to activate DEPTH image display and another one to show the 3D representation of the robot and the point cloud. When the 3rd button is activated, it shows some other buttons:

  • Show room on RGB
  • Show room on DEPTH
  • Clear projection lines
  • Reconstruct
      • DEPTH: shows the point cloud
      • RGB on DEPTH
  • Camera position: shows the relative position of the camera in the world.

For basic use, you only have to know the functionality of the 3 main buttons: Camera RGB, cameraDEPTH and Reconstruct DEPTH, in order to show the RGB and DEPTH images, and the pointcloud respectively.
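The point cloud shown by the Reconstruct DEPTH option is essentially the back-projection of every depth pixel through the pinhole camera model. A minimal sketch of that computation (the intrinsics fx, fy, cx, cy are assumed to be known, for example from the CameraCalibrator tool):

import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    # Back-project a depth image (in metres) into an Nx3 point cloud.
    # depth is an HxW array; fx, fy are the focal lengths in pixels and
    # cx, cy the principal point. Pixels with no depth reading are dropped.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack((x, y, depth), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]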

This tool uses only three ICE interfaces:

Execution

In order to run the component, here you have a complete guide on how to run the rgbdViewer tool with a simulated Kinect sensor.


Configuration file

You can also check the configuration file from here

A configuration file example may be like this:

rgbdViewer.CameraRGBActive=1
rgbdViewer.CameraRGB.Fps=10
rgbdViewer.CameraRGB.Proxy=cameraA:tcp -h localhost -p 9999
rgbdViewer.CameraDEPTHActive=1
rgbdViewer.CameraDEPTH.Fps=10
rgbdViewer.CameraDEPTH.Proxy=cameraB:tcp -h localhost -p 9999
rgbdViewer.pointCloudActive=0
rgbdViewer.pointCloud.Fps=10
rgbdViewer.pointCloud.Proxy=pointcloud1:tcp -h localhost -p 9999
rgbdViewer.Pose3DMotorsActive=0
rgbdViewer.Pose3DMotors.Proxy=Pose3DMotors1:tcp -h 193.147.14.20 -p 9999
rgbdViewer.KinectLedsActive=0
rgbdViewer.KinectLeds.Proxy=kinectleds1:tcp -h 193.147.14.20 -p 9999
rgbdViewer.WorldFile=./config/fempsa/fempsa.cfg
rgbdViewer.Width=320
rgbdViewer.Height=240
rgbdViewer.Fps=15
rgbdViewer.Debug=1

UAV Viewer[edit]

UAV_Viewer is a JdeRobot component for the teleoperation of UAVs. It works with simulated environments and with real drones (such as the Ar.Drone 1 and 2).


This tool uses the following ICE interfaces:

Execution

Execution for simulated environments:


uav_viewer uav_viewer_simulated.cfg

Execution for teleoperating the real Ar.Drone:


uav_viewer uav_viewer.cfg

Configuration file

The typical configuration file for simulated ArDrone:

UAVViewer.Camera.Proxy=Camera:default -h 0.0.0.0 -p 9000
UAVViewer.Pose3D.Proxy=Pose3D:default -h 0.0.0.0 -p 9000
UAVViewer.CMDVel.Proxy=CMDVel:default -h 0.0.0.0 -p 9000
UAVViewer.Navdata.Proxy=Navdata:default -h 0.0.0.0 -p 9000
UAVViewer.Extra.Proxy=Extra:default -h 0.0.0.0 -p 9000

For real ArDrone

UAVViewer.Camera.Proxy=Camera:default -h 0.0.0.0 -p 9994
UAVViewer.Pose3D.Proxy=Pose3D:default -h 0.0.0.0 -p 9000
UAVViewer.CMDVel.Proxy=CMDVel:default -h 0.0.0.0 -p 9850
UAVViewer.Navdata.Proxy=Navdata:default -h 0.0.0.0 -p 9700
UAVViewer.Extra.Proxy=Extra:default -h 0.0.0.0 -p 9701


From keyboard

The following commands are also accepted from the keyboard:

T - takeoff
L - land
C - change camera
W - move up
S - move down
A - rotate drone anti clockwise
D - rotate drone clockwise
8 - move forward
2 - move backward
6 - move right
4 - move left
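A hedged sketch of how such a keyboard table could be wired to the drone interfaces follows; the method names (takeoff, land, toggleCam, sendCMDVel) and their arguments are illustrative placeholders, not the verified UAV_Viewer API:

def on_key(key, cmdvel, extra, speed=1.0):
    # Dispatch a key press to the (hypothetical) drone proxies
    if key == 'T':
        extra.takeoff()
    elif key == 'L':
        extra.land()
    elif key == 'C':
        extra.toggleCam()
    elif key == 'W':
        cmdvel.sendCMDVel(0, 0, speed, 0, 0, 0)     # move up
    elif key == 'S':
        cmdvel.sendCMDVel(0, 0, -speed, 0, 0, 0)    # move down
    elif key == 'A':
        cmdvel.sendCMDVel(0, 0, 0, 0, 0, speed)     # rotate anti-clockwise
    elif key == 'D':
        cmdvel.sendCMDVel(0, 0, 0, 0, 0, -speed)    # rotate clockwise
    elif key == '8':
        cmdvel.sendCMDVel(speed, 0, 0, 0, 0, 0)     # move forward
    elif key == '2':
        cmdvel.sendCMDVel(-speed, 0, 0, 0, 0, 0)    # move backward
    elif key == '6':
        cmdvel.sendCMDVel(0, -speed, 0, 0, 0, 0)    # move right
    elif key == '4':
        cmdvel.sendCMDVel(0, speed, 0, 0, 0, 0)     # move left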

UavViewer-web[edit]

UavViewerJS is a component for the teleoperation of UAVs in JdeRobot from a web browser. It works with simulated environments and with real drones (such as the Ar.Drone 1 and 2). This component allows connection on demand, so you don't need to have all the interfaces operative to launch it. For example, you can work only with the cmdvel interface.

Structure[edit]

This tool is intended to teleoperate UAV robots both in real and simulated scenarios. For that purpose, the UavViewerJS tool connects to different ICE interfaces using WebSockets in order to get data from sensors and send data to the actuators (cmdvel). This tool is programmed with HTML, JavaScript and CSS.

Gui[edit]

The GUI of this tool is made with HTML to be used in a web browser. It is composed of three buttons (Start, Stop and open the tool configuration), one canvas to show the UAV camera, another for the 3D model (which you can enable or disable) and the last one for control.

The first canvas also has three buttons (take off, land and toggle cam), two sticks to teleoperate and flight indicators. If you double-tap the camera image, this canvas switches to fullscreen.

ICE Interfaces[edit]

This tool uses the following ICE interfaces:

Execution[edit]

To run this component, first you have to run gazebo with a model of UAV:

gazebo road_drone_textures.world

Then run UavViewerJS:

cd /usr/local/share/jderobot/webtools/uavviewerjs
node run.js

and finally, open the following address in a web browser:

http://localhost:7777

Configuration[edit]

To configure the tool, press the config button and enter the configuration.

NavigatorCamera[edit]

The NavigatorCamera tool allows teleoperating flyingkinects in a simulated Gazebo world, both in position and in velocity. To do so, the flyingkinect offers two different interfaces: one for the camera, and a pose3D interface that enables control of the camera position in the world. This tool has a very intuitive user interface, so controlling the camera is easy for the user.


This tool uses three different threads to handle the GUI, the images from the camera and the speed control. Each one depends on the main loop of the tool, which also creates the ICE connections with all the interfaces that the flyingkinect offers.

The GUI of this tool has several buttons for the position control of the camera. It also contains two canvases to teleoperate it easily. The GUI is divided into three sections: one for the translation movement (left side), another one for the rotation movement (center) and another one to control the step of the movement and reset the position of the sensor, as you can see in the image below:



This tool uses only two ICE interfaces:

Execution

Since this tool is made to teleoperate cameras (especially flyingkinects), it is necessary to launch a world with such a sensor in it. So for this example, we will run a world with a flyingkinect in it:

gazebo flyingkinect.world

and then run the navigatorCamera tool:

navigatorCamera navigatorCamera.cfg


Flyingkinects are programmed to allow more than one camera in a single scene, so you can launch multiple instances of navigatorCamera (configuring the endpoints properly), one for each flyingkinect in the scene.

Configuration file

You can also check the configuration file here

A configuration file example may be like this:

# Camera RGB interface
navigatorCamera.CameraRGB.Proxy=cameraRGB:tcp -h localhost -p 9997
navigatorCamera.CameraRGB.Format=RGB8
navigatorCamera.CameraRGB.Fps=10

# Pose3D interface
navigatorCamera.Pose3D.Proxy=Pose3d:default -h localhost -p 9987
navigatorCamera.Pose3D.Fps=10

# GUI activation flag
navigatorCamera.guiActivated=1
# Control activation flag
navigatorCamera.controlActivated=0
# speed activation flag
navigatorCamera.speedActivated=1

# Glade File
navigatorCamera.gladeFile=navigatorCamera.glade

navigatorCamera.TranslationStep=0.1
navigatorCamera.RotationStep=0.07

So you have different flags to activate/deactivate the GUI, the speed control or the full control of the camera. You can also specify the .glade file and the movement step of the camera.
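As an illustration only, each translation button can be thought of as a read-modify-write on the Pose3D interface using TranslationStep; the getPose3DData/setPose3DData method names and the pose fields below are assumptions, not the verified navigatorCamera API:

TRANSLATION_STEP = 0.1   # navigatorCamera.TranslationStep

def move_along_x(pose3d, step=TRANSLATION_STEP):
    # Shift the flyingkinect by one step along the X axis (hypothetical API)
    pose = pose3d.getPose3DData()
    pose.x += step
    pose3d.setPose3DData(pose)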

PanTilt Teleop[edit]

Pantilt Teleop is a JdeRobot tool made to handle a real Sony camera model EVI D100P through an ICE interface and a ROS driver.

This tool has two drivers involved. The first one is the ROS "usb_cam" driver, which is responsible for extracting the image from the sensor. The other one uses an ICE interface to teleoperate the camera. The interface of this tool includes both a class that takes an image and shows it in a window and a window that offers the possibility to teleoperate the camera.

Execution

Note that to make this component run, you need an image provider (a camera) connected to a USB port and listening at the ports you want to bind, that is, properly configured (through .cfg and .yml files). So first you have to run both the usb_cam and evicam drivers and then run Pantilt_teleop.

The way to make it run is the following:

0. When you connect the device to your computer, it uses two USB ports (one for the camera image, and another one for the VISCA interface, to send the commands to the motors). This results in two device mappings in the /dev/ directory:

- For the camera, a new video device will be added (/dev/video{0,1,2...}). If you have a laptop, /dev/video0 will be your main webcam, so the EVI camera will get the highest number. You will need to know this number for the usb_cam-test.launch file, where you will have to specify it (ROS needs it to know where to grab the images). Be careful, because if you disconnect/connect the camera several times, the number will keep incrementing until you reboot your machine.
- For the VISCA motors, they will map to the /dev/ttyUSB0 device. Because of the driver, this device needs to be granted read/write permissions every time it is connected to the computer. Just:
           $ sudo chmod 666 /dev/ttyUSB0

The required configuration files can be found at this repository.



1. Run "usb_cam" in the first terminal(ensure you have installed ros-kinetic-usb-cam packet):

$ roslaunch usb_cam-test.launch

2. Run the evicam_driver in the second:

$ evicam_driver evicam_driver.cfg

3. Finally, run the pantilt_teleop tool:

$ pantilt_teleop pantilt_teleop.yml

Configuration file

You can also check the configuration file here

A configuration file example may be like this:

pantilt_teleop:
  PTMotors:
    Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS (Not supported)
    Proxy: "PanTilt:default -h localhost -p 9977"
    Name: basic_component_pyPTMotors
  
  Camera:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "cameraA:default -h localhost -p 9999"
    Format: RGB8
    Topic: "/usb_cam/image_raw"
    Name: pantilt_teleop_pyCamera

  NodeName: pantilt_teleop_py

Where you can specify the endpoints to connect all the interfaces.

CameraCalibrator[edit]

This tool gives the possibility to efficiently calibrate cameras.

If a camera is not properly calibrated, it won't be able, for example, to project 3D points into the image correctly. Calibrating a camera means making the world lines fit with the image objects. In order to calibrate the camera, you have to show a chessboard pattern to it, moving the pattern as much as possible. This tool calibrates the camera frame by frame, so the more frames you choose for the calibration, the better it will be. Each time the camera finds a match with the pattern, that frame counts as calibrated; this process repeats until all the frames are calibrated. The camera calibration parameters are then saved in a text file.

This tool consists of a single thread that takes images from the camera to be calibrated and shows them in the GUI. It consists of a main loop where the image is periodically updated and shown while the intrinsic and extrinsic parameters of the camera are calculated.

The GUI is a single window showing the images captured from the camera. It has no GUI elements such as buttons or sliders, so it works with hotkeys. The hotkeys associated with this tool are:

  • g to start the calibration.
  • u to finish the process.

So once you start the tool, you have to show the chessboard pattern to the camera and press g, then move the pattern within the camera boundaries to properly calibrate it. Once the last frame is calibrated, you only have to press u on the keyboard to save the camera parameters in a file.
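Under the hood this kind of calibration follows the standard OpenCV chessboard procedure. A minimal sketch of one possible implementation (the pattern dimensions mirror the pattern.width, pattern.height and pattern.size configuration entries described below):

import cv2
import numpy as np

pattern = (9, 6)   # inner corners per row and column (pattern.width, pattern.height)
square = 26.0      # size of one square, in mm (pattern.size)

# 3D coordinates of the chessboard corners in the pattern's own frame
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []

def add_frame(gray):
    # Try to detect the chessboard in one grayscale frame; returns True on success
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
    return found

def calibrate(image_size):
    # Once enough frames have been collected, estimate intrinsics and distortion
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    return K, dist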

This tool uses only one ICE interface:

Execution

Note that to make this component run, you need an image provider (a camera) running and listening at the ports you want to bind and properly configured (through .cfg files) as explained here. So first you have to run a camera server and then run the cameraCalibrator.

The way to make it run is the following:

1. Run a camera server (we use CameraServer tool for this example):

$ cameraserver cameraserver.cfg

2. Run the CameraCalibrator tool:

$ cameraCalibrator cameraCalibrator.cfg

You can check the configuration file here

An example of this file:

cameraCalibrator.pattern.width=9
cameraCalibrator.pattern.height=6
cameraCalibrator.pattern.type=chessboard
cameraCalibrator.pattern.size=26.0f
cameraCalibrator.pattern.ratio=1.0f
cameraCalibrator.frames=20
cameraCalibrator.writePoints=1
cameraCalibrator.writeEstrinsics=1
cameraCalibrator.delay=2000

cameraCalibrator.nCameras=2
cameraCalibrator.camera.0.Proxy=cameraA:tcp -h localhost -p 9998
cameraCalibrator.camera.0.outFile=cameraA.yml
cameraCalibrator.camera.1.Proxy=cameraB:tcp -h localhost -p 9999
cameraCalibrator.camera.1.outFile=cameraB.yml

Where:

  • pattern.{width,height} indicates the number of cells (rows and columns) of the pattern used to calibrate the camera.
  • pattern.type indicates the type of the pattern. It can take these values: CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID. Note that you must have a proper pattern.
  • pattern.size indicates the total size of the pattern. Used to locate corners.
  • pattern.ratio aspect ratio of the pattern.
  • frames number of frames used to calibrate the camera. 20-30 is the optimal setting. If you have a powerful machine, you can set this number higher.
  • writePoints flag to choose whether the detected pattern points must be saved.
  • writeEstrinsics flag to choose whether the extrinsic parameters must be saved. Intrinsic parameters are set to true by default.
  • nCameras number of cameras to be calibrated. You have to set the following two lines for each camera you want to add to the calibration process, changing the proper names and ports, of course:
cameraCalibrator.camera.0.Proxy=cameraA:tcp -h localhost -p 9999
cameraCalibrator.camera.0.outFile=cameraA.yml

ColorTuner[edit]

The ColorTuner component implements image color filters in three color spaces: RGB, HSV and YUV. It is an application to configure tailored color filters in those spaces and is used to obtain optimal values of hue and saturation, as well as lighting, for that kind of filters. To perform the color conversions between spaces we used the conventions that appear in Wikipedia (HSV color conversions) and, for YUV, (YUV color conversions).

This tool has a single thread that takes care of getting images and showing them through the GUI. ColorTuner connects through the ICE interface to a camera server (real or simulated), takes images from it, and shows each image through a GUI class (called viewer).

This tool has an advanced user interface made with the Qt libraries. The GUI consists of two image frames where the images taken from the camera are shown (the input image and the output image with the filters applied). The filter is tuned by moving the sliders up and down to set the maximum and minimum values (for example of RED, GREEN and BLUE) to apply.
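The filtering itself is the classic threshold-and-mask operation. A minimal sketch of what such a filter does with the slider values, written with OpenCV as an illustration (this is not the ColorTuner source):

import cv2

def hsv_filter(image_bgr, lower, upper):
    # Keep only the pixels whose HSV values fall between the slider minima
    # and maxima; everything else is blacked out. lower/upper are (H, S, V)
    # tuples, e.g. lower=(35, 50, 50), upper=(85, 255, 255) for a rough green filter.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)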

This tool uses one ICE Interface:

Camera

Execution

In order to run the component, here you have a complete guide on how to run ColorTuner.


VisualStates[edit]

VisualStates is a tool for programming robot behaviors using hierarchical finite state machines (HFSM). It represents the robot's behavior graphically on a canvas composed of states and transitions. When the automaton is in a certain state, it passes from one state to another depending on the conditions established in the transitions. This graphical representation allows a higher level of abstraction for users, as they only have to worry about programming the current robot actions and selecting the components they need from the robot's interface.
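Conceptually, the generated code boils down to a state machine like the following Python sketch (a simplification for illustration; the real generated code also handles the hierarchy, timing and the robot interfaces):

class State:
    # One node of the automaton: an action plus outgoing transitions
    def __init__(self, name, action):
        self.name = name
        self.action = action        # callable executed while the state is active
        self.transitions = []       # list of (condition, target_state) pairs

    def add_transition(self, condition, target):
        self.transitions.append((condition, target))

    def step(self):
        self.action()
        for condition, target in self.transitions:
            if condition():
                return target       # condition met: jump to the next state
        return self                 # otherwise stay in the current state

def run(state, iterations=100):
    # Iterate the automaton a fixed number of times
    for _ in range(iterations):
        state = state.step()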

For more detailed information about using visualStates, click here.

There are several ready-to-use examples available here. In this video you can also see a simple example of a drone going through different behaviors.

VisualStates offers three different functionalities. First, there is the tool itself, visualStates, which we have already explained. This tool can generate code in two languages, C++ and Python, and for each language it also offers a runtime GUI to help with the debugging process. You can also generate the code as a ROS node, both in C++ and in Python.

Enabling runtime GUI[edit]

The runtime GUI is DISABLED by default, as it does not make sense to run it when executing the code on an actual robot.

When executing an automaton created with visualStates, if you want it to display the GUI you must ENABLE it by calling it with the argument --displaygui=true.

For example:
./automataName automataName.yml --displaygui=true 

This argument is the same in C++ and in python.

Execution[edit]

If the created behavior is an ICE C++ component, you have to create a build directory, configure with cmake and build the behavior before execution, by running the following commands from the behavior directory:

mkdir build
cd build
cmake ..
make
cd ..
./behaviorName behaviorName.yml --displaygui=true

If you have generated a Python behavior, you can run it directly from the behavior directory:

./behaviorName.py behaviorName.yml --displaygui=true

If you generate a ROS node, you have to make sure that your behavior directory is inside your catkin workspace. The generated behavior will be a new ROS package. You have to compile your workspace and run it as a ROS node:

cd catkin_ws
catkin_make
rosrun behaviorName behaviorName --displaygui=true


Scratch4Robots[edit]

This tool allows programming complex robots, such as the TurtleBot or drones, with the Scratch visual language.

3DViewer[edit]

3DViewer is a generic 3D viewer tool that shows a reconstruction of the 3D world in point cloud format.

This tool offers a viewer that shows a reconstruction of a 3D world captured by vision sensors such as cameras or RGBD sensors. It is useful with 3D world mapping algorithms, showing the reconstruction they produce.

DetectionSuite[edit]

(Under construction)

This is a tool for DeepLearning applications. Its development has its own public repository.

  • It accepts several image databases
  • It allows fast annotation of object detection databases
  • It shows statistics of neural network performance over images from several detection databases
  • It may run networks from several DeepLearning middleware (YOLO...)

Academy[edit]

This tool is a whole educational framework using ROS and JdeRobot to learn robotics and computer vision. It includes a collection of around twenty different exercises of robot programming and computer vision programming. It has its own web page and repository.

Basic component (C++)[edit]

This example shows images provided by a camera (real or simulated) through a user interface. The objective of the Basic Component is to teach new students how JdeRobot applications can be implemented and to introduce them to the basic structure of JdeRobot nodes. This is a component made to help us understand both the internal structure and the communications diagram between the two existing threads: the GUI thread and the control thread.

The following video shows how this component works:

Structure

The basic structure of the basic component is based on two main threads (built on the pthread library):


  1. GUI thread: this thread is responsible for showing the images and refreshing the user interface of the component.
  2. Processing and control thread: this thread is responsible for managing the communications between the basic component and the cameraserver component through ICE interfaces. It also stores the image to be used later by the GUI thread.


Both threads have an algorithm that controls the refresh rate of the update functions. These operations are called periodically: on every cycle an update function is called (both in the control and the GUI thread) that refreshes the state of our component. In this case, we refresh the information provided by the camera in order to obtain images in real time (the control thread manages that) and show them in our user interface (the GUI thread manages that). This algorithm can be modified to fit the user's (or tool's) preferences.

To make communication between threads possible, we have a shared memory area, which is simply a class responsible for storing and offering all the shared resources that our component handles. In our case, the shared resource is the image provided by the camera (in cv::Mat format; check the OpenCV documentation [1]). The control thread requests an image every few milliseconds (obtained through its ICE interface) and stores it in the shared memory area, so a few milliseconds later the GUI thread can take that image from the shared memory and show it in the user interface. This shared memory class has a mutex in all its functions in order to avoid race conditions and other problems derived from multithreading. This is transparent to the other classes, so it is more elegant and easier to understand. This is a little diagram that shows how the basic component is structured:


In this basic component the user interface is made with Qt, which is the simplest alternative for what was stated above. Qt provides a designer that allows building user interfaces intuitively by dragging widgets around a canvas, and the code needed to integrate each widget is generated automatically.
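The shared memory area described above can be sketched in a few lines; it is shown here in Python for brevity, while the C++ basic component implements the same idea with a class guarded by a mutex:

import threading

class SharedImage:
    # Mutex-protected holder shared by the control thread (writer)
    # and the GUI thread (reader)
    def __init__(self):
        self._lock = threading.Lock()
        self._image = None

    def set(self, image):
        # Called by the control thread with the latest frame from the camera
        with self._lock:
            self._image = image

    def get(self):
        # Called by the GUI thread when it refreshes the window
        with self._lock:
            return self._image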

This tool uses the following ICE interfaces:

Compilation

To make this work, we need to compile the whole component in the right way. As is known, JdeRobot uses CMake to compile all its components, so the first thing we need to know is how to compile a C++ class as a library. It's pretty simple, because CMake allows us to create a dynamic library using the following directive:

add_library (lib SHARED source_code file1 file2 ... )

Where lib is the name of the library that CMake will generate (a .so file), source_code is the implementation of our shared library (plugin) and fileX are the files our plugin depends on (like the moc_xx files in Qt, for instance).

To compile this component you can compile the whole JdeRobot project with cmake&make, or compile only this component as shown here (Compiling selected Tools section)

Execution

Note that to make this component run, you need an image provider (a camera) running and listening at the ports you want to bind and properly configured (through .cfg files) as explained here. So first you have to run a camera server and then run the Basic Component.

The way to make it run is the following:

1. Run the servers (we use Simulated Kobuki for this example):

$ gazebo kobuki-simple.world

2. Run the Basic Component tool:

$ basic_component basic_component.yml

Configuration file

You can check the configuration file here

Basic component py (Python)[edit]

This example node shows images provided by a camera (real or simulated) and allows us to send speed commands to the motors of wheeled robots (real or simulated) through a user interface. The objective of the Basic Component is to teach new students how the JdeRobot tools can be implemented and to introduce them to their basic structure. This is a component made to help us understand both the internal structure and the communications diagram between the two existing threads: the GUI thread and the control thread.


Structure

The structure of basic_component_py consists of two elements that provide the ICE interfaces and store the data to send or receive (camera.py, motors.py), and two threads, one for the interface and one to update the sensors.

  1. camera.py: provides the camera interface and stores the received images (thread-safe, using threading.Lock).
  2. motors.py: provides the motors interface and stores the speeds to send (thread-safe, using threading.Lock).
  3. GUI thread: this thread is responsible for showing the images from camera.py, refreshing the user interface of the component and sending the speed commands to the server via motors.py.
  4. Sensor thread: this thread is responsible for periodically requesting images from the server and storing them in camera.py.


As in the C++ basic component, both threads call their update functions periodically: the sensor thread refreshes the information provided by the camera so as to obtain images in real time, and the GUI thread shows them in the user interface. This refresh algorithm can be modified to fit the user's (or tool's) preferences.

This is a little diagram that shows how the basic component is developed:


The GUI is made up of two windows, one for the camera and another for control of motors.


This component uses two ICE interfaces:

Execution

Note that to make this component run, you need an image provider (a camera) and a motors driver.

The way to make it run is the following:

1. Run the servers (we use Simulated Kobuki for this example):

$ gazebo kobuki-simple.world

2. Run the Basic_Component_py tool:

basic_component_py basic_component_py.yml

Configuration file

You can check the configuration file here

Drivers[edit]

In this section, we describe the main drivers distributed with JdeRobot that provide access to different sensors and actuators. They are also valuable examples for configuring the available drivers, so we will describe how to use them and how they work.

Several Gazebo plugins act as drivers: they connect a robot in the Gazebo simulator with other software components in the JdeRobot platform. It is possible to retrieve information from several sensor devices, such as laser, encoders, motors, camera, sonar, etc., and to send information to the actuators of the robot (if it has them) through ICE interfaces.

Drones[edit]

ArDrone from Parrot[edit]

'ardrone_server' is a JdeRobot component for the control of and access to the sensors of the ArDrone 1 and 2 from Parrot. It is inspired by the ardrone_autonomy package of ROS (Fuerte).

It provides the following ICE Interfaces:

Configuration file

The driver offers six interfaces:

  1. cmd_vel, interface for velocity commands (forward, backward, ...).
  2. navdata, interface for sensor data transmission (altitude, velocities, ...).
  3. ardrone_extra, interface for extra functions of the ArDrone (record video on USB, ...) and basic maneuvers (takeoff, land and reset).
  4. Camera, standard interface for image transmission in JdeRobot.
  5. remoteConfig, standard interface for XML file transmission in JdeRobot.
  6. Pose3D, standard interface for pose information (x, y, z, h and quaternion).

An example of configuration that you, most likely, will find by default is the following:

  • All interfaces: ardrone_interfaces.cfg
ArDrone.Camera.Endpoints=default -h 0.0.0.0 -p 9999
ArDrone.Camera.Name=ardrone_camera
ArDrone.Camera.FramerateN=15
ArDrone.Camera.FramerateD=1
ArDrone.Camera.Format=RGB8
ArDrone.Camera.ArDrone2.ImageWidth=640
ArDrone.Camera.ArDrone2.ImageHeight=360
ArDrone.Camera.ArDrone1.ImageWidth=320
ArDrone.Camera.ArDrone1.ImageHeight=240
# If you want a mirror image, set to 1
ArDrone.Camera.Mirror=0


ArDrone.Pose3D.Endpoints=default -h 0.0.0.0 -p 9998
ArDrone.Pose3D.Name=ardrone_pose3d

ArDrone.RemoteConfig.Endpoints=default -h 0.0.0.0 -p 9997
ArDrone.RemoteConfig.Name=ardrone_remoteConfig

ArDrone.Navdata.Endpoints=default -h 0.0.0.0 -p 9996
ArDrone.Navdata.Name=ardrone_navdata

ArDrone.CMDVel.Endpoints=default -h 0.0.0.0 -p 9995
ArDrone.CMDVel.Name=ardrone_cmdvel

ArDrone.Extra.Endpoints=default -h 0.0.0.0 -p 9994
ArDrone.Extra.Name=ardrone_extra

ArDrone.NavdataGPS.Endpoints=default -h 0.0.0.0 -p 9993
ArDrone.NavdataGPS.Name=ardrone_navdatagps

Where:

  • Endpoints indicates the address where our server will be listening for requests.
  • Name is the name of the ICE interface.
  • ImageWidth & ImageHeight set the resolution of the images taken by the drone.
  • FramerateN & FramerateD set the frames per second taken by the drone.

ArDrone2 in Gazebo[edit]

'Quadrotor2' is version 2.0 of the quadrotor plugin. You can see a complete description of its features and changes at:

Multiple instances: quadrotor2 allows multiple models just by editing the world files. Each spawned quadrotor must be named with an extra suffix -p<port number>. This port overrides the one defined in the Ice.Config file, allowing even the reuse of the same config file.

Configuration file

Quadrotor.Adapter.Endpoints=default -h localhost -p 9000
Quadrotor.CMDVel.Name=CMDVel
Quadrotor.Navdata.Name=Navdata
Quadrotor.Extra.Name=Extra
Quadrotor.Camera.Name=Camera
Quadrotor.Pose3D.Name=Pose3D

Cameras[edit]

Cameraserver[edit]

'Cameraserver' is a component to serve N cameras, either real or simulated from a video file. It uses GStreamer internally to handle and process the video sources.

It provides only one ICE interface

Configuration file

To use cameraserver we just have to edit the component's configuration to set the video sources and the served formats of our cameras. We also have to set the network address where our component will be listening for new connections, or choose the locator service.

A configuration file example may be like this:

#network configuration
CameraSrv.Endpoints=default -h 127.0.0.1 -p 9999

#default service mode
CameraSrv.DefaultMode=1

#cameras configuration
CameraSrv.NCameras=1

#camera 0
CameraSrv.Camera.0.Name=cameraA
CameraSrv.Camera.0.ShortDescription=Camera pluged to /dev/video0
CameraSrv.Camera.0.Uri=0
CameraSrv.Camera.0.FramerateN=15
CameraSrv.Camera.0.FramerateD=1
CameraSrv.Camera.0.ImageWidth=320
CameraSrv.Camera.0.ImageHeight=240
CameraSrv.Camera.0.Format=RGB8
CameraSrv.Camera.0.Invert=False

The first block defines the network configuration, that is, the address where our server will be listening for requests. The next block defines the number of cameras our server will provide, and after it we have the configuration for the cameras. Notice that each camera has its parameters after the prefix CameraSrv.Camera.X., with X in the interval [0..NCameras). In this example we have only one camera. A camera has several parameters:

  • Name: Name used to serve this camera. The interface for this camera will have this name.
  • ShortDescription: A short description of this camera that may be used by the client to retrieve more information about the camera than only a name.
  • Uri: String that defines the video source.
  • FramerateN: Frame rate numerator.
  • FramerateD: Frame rate denominator.
  • ImageWidth: Size of the served image.
  • ImageHeight: Size of the served image.
  • Format: A string defining the format of the served image. Cameraserver uses libcolorspacesmm to manage the image formats. Currently accepted formats are RGB888 for 24-bit RGB and YUY2.
  • Invert: A boolean value indicating the image orientation. If you need to set the camera in inverted mode (maybe in a real robot), just set this parameter to True.

Cameraserver can serve several types of sources. Each of them is named using the Uri parameter with a syntax like:

type-of-source://source-descriptor

where the type of source can be one of these:

  • RGB cameras: the source descriptor names the device, e.g. 0 -> /dev/video0
  • file: For video files. The source descriptor names the file, e.g. /home/user/file.avi
  • http or https: For files located on a web server. The source descriptor names the remote resource, e.g. http://webserver.com/file.avi


Cameraserver_py[edit]

'Cameraserver_py' is a component to serve a camera (it only serves RGB8 at the moment). It uses OpenCV internally to handle and process the video source.

It provides only one ICE interface

Configuration file

To use cameraserver_py we just have to edit the component's configuration to set the video source and FPS of our camera. We also have to set the network address where our component will be listening.

A configuration file example may be like this:

cameraServer:
  Proxy: "default -h 0.0.0.0 -p 9999"
  Uri: 0 #0 corresponds to /dev/video0, 1 to /dev/video1, and so on...
  FrameRate: 12 #FPS read from the device
  Name: cameraA

  • Name: Name used to serve this camera. The interface for this camera will have this name.
  • Uri: String that defines the video source.
  • FrameRate: Frame rate in frames per second.
  • Proxy: Network configuration.

RGBD cameras (Xtion, Kinect...)[edit]

The 'OpenniServer' driver offers an entry point to connect depth sensors to JdeRobot tools through ICE interfaces. Currently, it is compatible with the Kinect from Microsoft and the Xtion from ASUS (both RGBD sensors). It also offers the possibility to build a point cloud from the images taken by the sensor.

It provides the following ICE Interfaces:

Configuration file

This driver uses two different interfaces which have to be configured in order to connect the adapters from ICE properly. To do so, we have several configuration files (*.cfg) that take care of this. An example of configuration that you most likely will find by default is the following:

openniServer.Endpoints=default -h 0.0.0.0 -p 9999
#with registry
#cameras configuration
openniServer.PlayerDetection=0
openniServer.Mode=0
openniServer.ImageRegistration=1
openniServer.Hz=20

NamingService.Enabled=0
NamingService.Proxy=NamingServiceJdeRobot:default -h 0.0.0.0 -p 10000

#mode=0 -> fps: 30x: 320y 240
#mode=2 -> fps: 60x: 320y 240
#mode=4 -> fps: 30x: 640y 480
#mode=6 -> fps: 25x: 320y 240
#mode=8 -> fps: 25x: 640y 480

#camera 1
openniServer.deviceId=0
openniServer.CameraRGB.Name=cameraA
openniServer.CameraRGB.Format=RGB8
openniServer.CameraRGB.fps=25
openniServer.CameraRGB.PlayerDetection=0
openniServer.CameraRGB.Mirror=0

#openniServer.calibration=camera-0.cfg
openniServer.CameraDEPTH.Name=cameraB
openniServer.CameraDEPTH.Format=DEPTH8_16
openniServer.CameraDEPTH.fps=10
openniServer.CameraDEPTH.PlayerDetection=0
openniServer.CameraDEPTH.Mirror=0

openniServer.PointCloud.Name=pointcloud1
openniServer.pointCloud.Fps=15

#Activation flags
openniServer.CameraRGB=1
openniServer.CameraIR=1
openniServer.CameraDEPTH=1
openniServer.pointCloudActive=0
openniServer.Pose3DMotorsActive=0
openniServer.KinectLedsActive=0
openniServer.ExtraCalibration=0
openniServer.Debug=1
openniServer.Fps=20


# Levels: 0(DEBUG), 1(INFO), 2(WARNING), 3(ERROR)

openniServer.Log.File.Name=./log/openniServer.txt
openniServer.Log.File.Level=0
openniServer.Log.Screen.Level=0

Where:

  • Endpoints indicates the address where our server will be listening to requests
  • DeviceID indicates the camera which will be connected
  • Name is the name of the ICE interface
  • Format is the expected image format from the sensor
  • Fps is the number of frames per second offered by the sensor
  • The Activation flags block allows activating or deactivating the different options of the sensors, or even the sensor itself.

Note that there are two almost identical blocks with these tags: openniServer.CameraRGB.XXX and openniServer.CameraDEPTH.XXX, which means that each sensor (RGB camera and DEPTH camera) has to be configured separately.


Simulated RGB cameras in Gazebo[edit]

YouTubeServer[edit]

'YouTubeServer'


FlyingKinect2 in Gazebo[edit]

Version 2.0 of the flyingKinect, done from scratch.
It takes all the design patterns and features of Quadrotor2, culminating in a new development scheme:

Configuration file

Ice.MessageSizeMax=2097152
Kinect.Endpoints=default -h localhost -p 9997

Wheeled indoor robots[edit]

TurtleBot robot[edit]

'kobuki_driver' is a JdeRobot driver for the control of and access to the sensors of the TurtleBot robot from Yujin Robot. It is inspired by the kobuki_driver from ROS (Groovy).

Currently the TurtleBot robot supports two ICE interfaces:

But you can attach other devices to the robot, like cameras, lasers, mechanical arms, etc.

Configuration file

In order to use the kobuki_driver, you will need to configure the motors and encoders endpoints properly. You also have to connect the robot to your computer (laptop) via USB and launch the driver as explained here.

An example of configuration file is the following:

kobuki.Motors.Endpoints=default -h 0.0.0.0 -p 9999
kobuki.Pose3D.Endpoints=default -h 0.0.0.0 -p 9997

Where:

  • Endpoints indicates the address where our server will be listening for requests.


Pioneer robot in Gazebo[edit]

This driver allows Gazebo to load a simulated Pioneer robot, connecting all its interfaces through ICE and waiting for a tool to bind to them.

It provides the following ICE Interfaces

Configuration file

This driver uses seven different interfaces which have to be configured in order to connect the adapters from ICE properly. To do so, we have several configuration files (*.cfg) that take care of this. An example of configuration that you most likely will find by default is the following:

  • Left camera: cam_pioneer_left.cfg
CameraGazebo.Endpoints=default -h localhost -p 9995

#camera 1
CameraGazebo.Camera.0.Name=cameraA
CameraGazebo.Camera.0.ImageWidth=320
CameraGazebo.Camera.0.ImageHeight=240
CameraGazebo.Camera.0.Format=RGB8
  • Right camera:cam_pioneer_right.cfg
CameraGazebo.Endpoints=default -h localhost -p 9994

#camera 1
CameraGazebo.Camera.0.Name=cameraA
CameraGazebo.Camera.0.ImageWidth=320
CameraGazebo.Camera.0.ImageHeight=240
CameraGazebo.Camera.0.Format=RGB8
  • Motors: pioneer2dxMotors.cfg
Motors.Endpoints=default -h localhost -p 9999
  • Encoders: pioneer2dxEncoders.cfg
Encoders.Endpoints=default -h localhost -p 9997
  • Laser: pioneer2dx_laser.cfg
Laser.Endpoints=default -h localhost -p 9996
  • Pose3DEncoders & Pose3DMotors: pioneer2dx_pose3dencoders.cfg
Pose3DEncoders1.Endpoints=default -h localhost -p 9993
Pose3DEncoders2.Endpoints=default -h localhost -p 9992
Pose3DMotors1.Endpoints=default -h localhost -p 9991
Pose3DMotors2.Endpoints=default -h localhost -p 9990
  • Pose3D: pioneer2dxPose3d.cfg
Pose3D.Endpoints=default -h localhost -p 9989

Where:

  • Endpoints indicates the address where our server will be listening for requests
  • Name (camera) is the name of the camera interface

With this configuration, you only have to know the name and the port of the endpoint in order to bind it with a JdeRobot tool. See Running jderobot pioneer to see how to launch an example of this.

TurtleBot robot in Gazebo[edit]

This driver allows Gazebo to load a simulated TurtleBot robot, connecting all its interfaces through ICE and waiting for a tool to bind to them.

It provides the following ICE Interfaces:

Configuration file

This driver uses seven different interfaces which have to be configured in order to connect the adapters from ICE properly. To do so, we have several configuration files (*.cfg) that take care of this. An example of configuration that you most likely will find by default is the following:

  • Left camera: cam_turtlebot_left.cfg
CameraGazebo.Endpoints=default -h localhost -p 8995

#camera 1
CameraGazebo.Camera.0.Name=cameraA
CameraGazebo.Camera.0.ImageWidth=320
CameraGazebo.Camera.0.ImageHeight=240
CameraGazebo.Camera.0.Format=RGB8
  • Right camera:cam_turtlebot_right.cfg
CameraGazebo.Endpoints=default -h localhost -p 8994

#camera 1
CameraGazebo.Camera.0.Name=cameraA
CameraGazebo.Camera.0.ImageWidth=320
CameraGazebo.Camera.0.ImageHeight=240
CameraGazebo.Camera.0.Format=RGB8
  • Motors: turtlebotMotors.cfg
Motors.Endpoints=default -h localhost -p 8999
  • Encoders: turtlebotEncoders.cfg
Encoders.Endpoints=default -h localhost -p 8997
  • Laser: turtlebot_laser.cfg
Laser.Endpoints=default -h localhost -p 8996
  • Pose3DEncoders & Pose3DMotors: turtlebot_pose3dencoders.cfg
Pose3DEncoders1.Endpoints=default -h localhost -p 8993
Pose3DEncoders2.Endpoints=default -h localhost -p 8992
Pose3DMotors1.Endpoints=default -h localhost -p 8991
Pose3DMotors2.Endpoints=default -h localhost -p 8990
  • Pose3D: turtlebotPose3d.cfg
Pose3D.Endpoints=default -h localhost -p 8998

Where:

  • Endpoints indicates the address where our server will be listening for requests
  • Name (camera) is the name of the camera interface

With this configuration, you only have to know the name and the port of the endpoint in order to bind it with a JdeRobot tool. See Running jderobot turtlebot to see how to launch an example of this.

Cars[edit]

Formula1 car in Gazebo[edit]

This driver allows Gazebo to load a simulated Formula 1 robot, connecting all its interfaces through ICE and waiting for a tool to bind to them.

It provides the following ICE Interfaces

Configuration file

This driver uses four different interfaces which have to be configured in order to connect the adapters from ICE properly. To do so, we have several configuration files (*.cfg) that take care of this. An example of configuration that you most likely will find by default is the following:

  • Left camera: cam_f1_left.cfg
CameraGazebo.Endpoints=default -h localhost -p 8995

#camera 1
CameraGazebo.Camera.0.Name=cameraA
CameraGazebo.Camera.0.ImageWidth=320
CameraGazebo.Camera.0.ImageHeight=240
CameraGazebo.Camera.0.Format=RGB8
  • Right camera:cam_f1_right.cfg
CameraGazebo.Endpoints=default -h localhost -p 8994

#camera 1
CameraGazebo.Camera.0.Name=cameraA
CameraGazebo.Camera.0.ImageWidth=320
CameraGazebo.Camera.0.ImageHeight=240
CameraGazebo.Camera.0.Format=RGB8
  • Motors: f1Motors.cfg
Motors.Endpoints=default -h localhost -p 8999
  • Laser: f1_laser.cfg
Laser.Endpoints=default -h localhost -p 8996
  • Pose3D: f1Pose3D.cfg
Pose3D.Endpoints=default -h localhost -p 8998

Where:

  • Endpoints indicates the address where our server will be listening for requests
  • Name (camera) is the name of the camera interface

With this configuration, you only have to know the name and the port of the endpoint in order to bind it with a JdeRobot tool.

Taxi in Gazebo[edit]

This driver allows Gazebo to load a simulated taxi car, connecting all its interfaces through ICE and waiting for a tool to bind to them.

It provides only one ICE interface:

Configuration file

This driver uses only one interface, which has to be configured in order to connect the ICE adapter properly. To do so, we have a configuration file (*.cfg) that takes care of this. An example of the configuration that you will most likely find by default is the following:

  • Motors: carMotors.cfg
Motors.Endpoints=default -h localhost -p 8999
  • Endpoints indicates the address where our server will be listening for requests

With this configuration, you only have to know the name and the port of the endpoint in order to bind it with a JdeRobot tool.



Laser sensors[edit]

Hokuyo Laser[edit]

'Laser_server' is a component to serve distances measured with a Hokuyo laser (the maximum measurable distance is 5.6 meters).

It provides the following ICE Interface:

Configuration file

To use laser_server we just have to edit the component's configuration to set the network address where our component will be listening, the DeviceId, the minimum and maximum measurement angles (in degrees) and the clustering. A configuration file example may be like this:

Laser.Endpoints=default -h 0.0.0.0 -p 9998

#Specifies laser type, this currently only works with hokuyo
Laser.Model=hokuyo

#0 corresponds to /dev/ttyACM0, 1 to /dev/ttyACM1, and so on...
Laser.DeviceId=0

#Indicates the beginning and end of the capture of the laser in degrees (0 for the front)
Laser.MinAng=-90
Laser.MaxAng=90

Laser.FaceUp=1

#Number of adjacent ranges to be clustered into a single measurement.
#0 -> 513 ranges in measurement
#3 -> 171 ranges in measurement
#...
Laser.Clustering=0

  • Endpoints: network address where our component will be listening
  • Model: indicates the model of the laser, for future extensions. Do not change it
  • DeviceId: indicates the id of the sensor (/dev/ttyACMx)
  • MinAng and MaxAng: indicate the beginning and end of the capture of the laser in degrees (0 for the front)
  • Clustering: number of adjacent ranges to be clustered into a single measurement
  • FaceUp: indicates whether the laser is facing up or not

Execution

To run the driver, first we must grant read and write permissions for all users on the device:

sudo chmod 666 /dev/ttyACM0

Once done, run the driver:

laser_server --Ice.Config=laser_server.cfg


Standardized interfaces[edit]

Configuration files[edit]

Usually, the robotic applications in JdeRobot are composed of several concurrent components, also known as nodes. In order to run a component, the execution syntax is:

$ Component configuration_file.cfg

Each node requires a configuration file with the format:

...
...
myComponent.X.Y = Z
...
...

This format is better defined in ICE middleware documentation.

Some entries are required, like those defining the component address (if we are connecting our components directly) or the locator service (if we are using IceGrid). Others are optional and can define the behavior of our component in some cases. Each component has a set of configuration entries to set its specific parts. A common configuration file may be like this (cameraserver example):

CameraSrv.Endpoints=default -h 0.0.0.0 -p 9999

The above configuration file includes ALL the information necessary to make the component accessible through the network. Typically, the server side configuration needs an endpoint (address + port) on which it will be listening for connection requests. On the other hand, the client side needs to know both the address and the port. Since we use ICE to manage the network layer, specific interface names have to be set in order to locate the resource we want to connect to. So in the example shown above, we have an endpoint which includes:

  • Address: 0.0.0.0 or localhost
  • Port: 9999
  • Interface name: cameraA

With only these three values, we can create a configuration for the client (following the example):

Cameraview.Camera.Proxy=cameraA:default -h 0.0.0.0 -p 9999

In which we have set the same values that the server offers:

  • Address: 0.0.0.0 or localhost (same as server)
  • Port: 9999 (same as server)
  • Interface name: cameraA (same as server)

And the format should be:

  • client side:
Whatever.Label.You.Want.Proxy=Interface_name:protocol -h address -p port
  • server side:
Whatever.Label.You.Want.Endpoints=protocol -h address -p port

Typically, the label we use is a meaningful name that allows the user to recognize at first sight the component it was made for; the name of the component followed by the interface type (camera, pose3d, motors, etc.) is a good choice for the label. So in cameraserver (server side) we use CameraSrv.Endpoints and in cameraview (client side) we use Cameraview.Camera.Proxy.
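On the client side, those configuration entries are resolved through Ice at start-up. A minimal Python sketch of that step (the Slice-generated proxy class and its methods are assumptions, hence commented out):

import sys
import Ice

# Run with: python client.py --Ice.Config=cameraview.cfg
ic = Ice.initialize(sys.argv)
try:
    # Read the proxy string defined in the .cfg file,
    # e.g. "cameraA:default -h 0.0.0.0 -p 9999"
    prop = ic.getProperties().getProperty("Cameraview.Camera.Proxy")
    base = ic.stringToProxy(prop)
    # camera = jderobot.CameraPrx.checkedCast(base)   # assumed Slice-generated class
    # image = camera.getImageData()                   # assumed interface call
finally:
    ic.destroy()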

You can see some configurations in the configuration files of the following examples.