Tutorials

Here you can find examples of how to run different JdeRobot components and their configurations.

VisualStates tool

TurtleBot Example with Comm ICE interface

Now, we are going to run an example for VisualStates. For that, first we must generate the automata code, so we go to the visualStates directory and run the program. In my case:

visualStates_py

Here, you would create your own automata, but for this case we are just using an example that has already been created, so we click the Open button in the File menu and select the file src/examples/visualStates/jderobot_comm/ice/obstacleAvoidancePython/obstacleAvoidancePython.xml from the JdeRobot examples. Once we have opened it, we just save it and click Actions/Generate python code, and it will generate the files obstacleAvoidancePython.py and obstacleAvoidancePython.yml in the directory where we saved it. The process would be the same for a C++ example.
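
For reference, the generated .yml is a comm-style configuration file like the others shown on this page. The following is only a hypothetical sketch of what it might contain (the real generated file may use different section names, ports or topics):

obstacleAvoidancePython:
  Motors:
    Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Motors:default -h localhost -p 9001"
    Name: obstacleAvoidanceMotors

  Laser:
    Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Laser:default -h localhost -p 9001"
    Name: obstacleAvoidanceLaser

  NodeName: obstacleAvoidancePython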

Once we have the executable ready, we have to launch the Gazebo world:

gazebo kobuki-simple.world

Then, in another shell, we go to the directory where we have created our automata, in my case:

cd src/examples/visualStates/jderobot_comm/ice/obstacleAvoidancePython

And we just launch it, indicating that we also want to display the runtime GUI:

./obstacleAvoidancePython.py obstacleAvoidancePython.yml --displaygui=true

And then, we should have something similar to this:

TurtleBot Example with ROS

Now, we are going to run an example for VisualStates. For that, first we must generate the automata code, so we make a directory to store the automata and copy the example into it:

mkdir obstacleAvoidancePython && cd obstacleAvoidancePython
cp /opt/jderobot/share/visualstates_py/examples/ros/obstacleAvoidancePython/obstacleAvoidancePython.xml .


We run visualStates_py, click the Open button in the File menu, and select the file obstacleAvoidancePython.xml. Once we have opened it, we just save it and click Actions/Generate python code, and it will generate obstacleAvoidancePython.py together with the other files required to make it a ROS package, in the directory where we saved it. The process would be the same for a C++ example.

Once we have generated our package, we copy the example folder to our catkin workspace (you are assumed to already have a ROS catkin workspace) and compile it:

cp -r obstacleAvoidancePython catkin_ws/src/
cd catkin_ws
catkin_make
source devel/setup.bash

We need to install the rosbash package:

sudo apt install rosbash

We need to run the ROS master in a terminal:

roscore

We need to run the Gazebo simulator, in another terminal, using the gazebo_ros package as follows:

rosrun gazebo_ros gazebo kobuki-simple-ros.world

Finally, we can run our generated package:

rosrun obstacleAvoidancePython obstacleAvoidancePython.py --displaygui=true

You can also see the example in action here:

ArDrone Example with Comm ICE interface

Now, we are going to run an example for VisualStates. For that, first we must generate the automata code, so we go to the visualStates directory and run the program. In my case:

cd ~/JdeRobot/src/tools/visualStates_py/
./visualStates.py

Here, you would create your own automata, but for this case we are just using an example that has already been created, so we click the Open button in the File menu and select the file src/examples/visualStates/jderobot_comm/ice/ArDrone/ArDrone.xml from the JdeRobot examples. Once we have opened it, we just save it and click Actions/Generate python code, and it will generate the files ArDrone.py and ArDrone.yml in the directory where we saved it. The process would be the same for a C++ example.

Once we have the executable ready, we have to launch the Gazebo world:

gazebo ArDrone.world

Then, in another shell, we go to the directory where we have created our automata, in my case:

cd src/examples/visualStates/jderobot_comm/ice/ArDrone

And we just launch it indicating that we also want to display the runtime GUI:

./ArDrone.py ArDrone.yml --displaygui=true

And then, we should have something similar to this:

Scratch4Robots tool

Drone Cat and Mouse Example

We are going to use a Scratch project already prepared, called test_cat_mouse_2.sb2. First of all we have to generate our Python code with:

~/JdeRobot/src/tools/scratch2jderobot/scripts $ scratch2python test_cat_mouse_2.sb2

This will generate a python executable that will be saved in the directory ~/JdeRobot/src/tools/scratch2jderobot/src/scratch2jderobot

Now we need a simulated scenario in which to run our example. In another terminal:

~ $ gazebo gato_raton_1_transparent.world

To execute our code we need a configuration file. For this example we use drone.yml.
All configuration files must be in their corresponding directory: scratch2jderobot/cfg

~/JdeRobot/src/tools/scratch2jderobot/src/scratch2jderobot $ test_cat_mouse_2.py drone.yml

Our drone is now searching for something red to follow.


Finally we should have something like this.



Scratch4Robots with kobuki

This example is ready to work directly with ROS.

Install the tool and all its requirements

Follow this guide:

   https://jderobot.org/Scratch4Robots#Installing

Make the translation from Scratch to python

   cd Scratch4Robots/examples/robot_example/src
   ./scratch2python.py robot_example_2.sb2

Launch the simulated world

In another terminal, run:

   roslaunch kobuki_gazebo kobuki_empty_world.launch --screen

Execute the generated code

   cd Scratch4Robots/examples/robot_example/src
   ./robot_example_2.py ../cfg/robot_ros.yml

Only with catkin workspace

You have an example here:

Cameras

Camserver + CamViz

One of the most used configurations to try JdeRobot is camserver + camViz. If you have already installed JdeRobot, go to Run Example.

Install packages

  • Add the latest ROS sources:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
  • Add the latest zeroc-ice sources:
sudo apt-add-repository "deb http://zeroc.com/download/apt/ubuntu$(lsb_release -rs) stable main"
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 5E6DA83306132997
  • Add JdeRobot repository:
sudo sh -c 'echo "deb [arch=amd64] http://jderobot.org/apt $(lsb_release -sc) main" > /etc/apt/sources.list.d/jderobot.list'
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 24E521A4
  • Update the repositories
sudo apt update
  • Install Packages:
sudo apt install jderobot-camserver jderobot-camviz

Run Example

  • Run camserver first in one terminal
camserver camserver.cfg
  • Run camViz in a different terminal
camViz camViz.yml

The default configuration files are ready to run these components without changes, but pay attention to the interface you want to connect to: make sure the name, IP and port match those in the camserver configuration file. You can see an example running camserver+camViz in the next video (emphasizing config files):
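
As an illustration of what must match (treat this only as a sketch; the exact property names depend on the version you have installed), the camserver endpoint and camera name should be the same ones referenced by the camViz proxy:

# camserver.cfg (illustrative fragment)
CameraSrv.Endpoints=default -h 0.0.0.0 -p 9999
CameraSrv.Camera.0.Name=cameraA

# camViz.yml (illustrative fragment): same port and camera name
CamViz:
  Camera:
    Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "cameraA:default -h localhost -p 9999"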

Camserver python + CamViz

One of the most used configurations to try JdeRobot is camserver_py + camViz. Make sure you have configured camserver_py properly.

  • Change the server from which you receive images in the camViz yml file.
sudo nano /opt/jderobot/share/jderobot/conf/camViz.yml

Change "Ice" to "ROS", since camserver_py is a fully ROS-based component.
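
In practice this means setting the camera's Server field to the ROS value and pointing the Topic at the image topic published by camserver_py. A hypothetical fragment (the actual topic name depends on your camserver_py.yml):

CamViz:
  Camera:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS (was 1 for Ice)
    Format: RGB8
    Topic: "/camserver_py/image_raw" # hypothetical topic name, check your camserver_py.yml
    Name: cameraA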

  • Start a ROS master in one terminal
roscore
  • Run camserver_py first in another terminal
camserver_py camserver_py.yml
  • Run camViz in a different terminal
camViz camViz.yml

The default configuration files are ready to run these components without changes, but pay attention to the interface you want to connect to: make sure the name, IP and port match those in the camserver_py configuration file. You can see an example running camserver_py+camViz in the next video (emphasizing config files):

Camserver + Opencvdemo

  • Run camserver first in one terminal
camserver camserver.cfg
  • Run opencvdemo in a different terminal
opencvdemo opencvdemo.cfg

The default configuration files are ready to run these components without changes, but pay attention to the interface you want to connect to: make sure the name, IP and port match those in the camserver configuration file. You can see an example running camserver+opencvdemo in the next video (emphasizing config files):

Camserver + ColorTuner

One of the most used configurations to try JdeRobot is cameraserver + colorTuner. Make sure you have configured cameraserver properly.

  • Run camserver first in one terminal
camserver camserver.cfg
  • Run colorTuner in a different terminal
colorTuner colorTuner_py.yml

The default configuration files are ready to run these components without changes, but pay attention to the interface you want to connect to: make sure the name, IP and port match those in the camserver configuration file. You can see an example running camserver+colorTuner in the next video (emphasizing config files):

Cameraserver-web + Cameraview-web

Install packages

  • Install NodeJS and the package manager (npm):
sudo apt-get install nodejs
sudo apt-get install npm
  • Install ROS Kinetic:
sudo apt-get install ros-kinetic-desktop-full
  • Install rosbridge_server:
sudo apt-get install ros-kinetic-rosbridge-server
  • Install Electron, JQuery and js-yaml (to read yml files) in CameraServer-web and CameraView-web:

In a terminal, move to the path of each one and run this command (it will install everything):

npm install

Run Example

  • Run rosbridge-server for websocket first in one terminal
roslaunch rosbridge_server rosbridge_websocket.launch
  • Run cameraserver_web in a different terminal (move to its path):
npm start
  • Run cameraview_web in a different terminal (move to its path):
npm start

You can change the connection configuration through the configuration files or in the configuration menu of the application. To change the configuration of the rosbridge-server, move to the path where it has been installed (../ros/kinetic/share) and modify the rosbridge_websocket.launch file.
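
If your version of rosbridge_websocket.launch exposes the port as a launch argument (recent releases default it to 9090), you can also override it from the command line instead of editing the file:

roslaunch rosbridge_server rosbridge_websocket.launch port:=9090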

You can see an example running in the next video:

Ros usb_cam + Cameraview-web

Install packages

  • Install NodeJS and the package manager (npm):
sudo apt-get install nodejs
sudo apt-get install npm
  • Install ROS Kinetic:
sudo apt-get install ros-kinetic-desktop-full
  • Install rosbridge_server:
sudo apt-get install ros-kinetic-rosbridge-server
  • Install ROS usb_cam:
sudo apt-get install ros-kinetic-usb-cam
  • Install Electron, JQuery and js-yaml (to read yml files) in CameraView-web:

In a terminal, move to the path and run this command (it will install everything):

npm install

Run Example

  • Run rosbridge-server for websocket first in one terminal
roslaunch rosbridge_server rosbridge_websocket.launch
  • Run ROS usb_cam in a different terminal:
roslaunch usb_cam usb_cam-test.launch
  • Run cameraview_web in a different terminal (move to its path):
npm start

You can change the connection configuration through the configuration files or in the configuration menu of the application. To change the configuration of the rosbridge-server or usb_cam, move to the path where they have been installed (../ros/kinetic/share) and modify the rosbridge_websocket.launch or usb_cam-test.launch file.

You can see an example running in the next video:

Depth sensors

Kinect1 + ROS driver + RGBDViewer tool

The combination of the freenect ROS driver and the RGBDViewer tool is meant to obtain and translate information from RGBD sensors like the Microsoft Kinect. In JdeRobot we use these components to obtain images and process them. In order to run this example, you must:

  • Install the ros-kinetic-freenect-launch Debian package
sudo apt-get install ros-kinetic-freenect-launch
  • Connect the RGBD sensor in an exclusive USB port (the port must not be shared).
  • Once connected, run the freenect ROS driver:
roslaunch freenect_launch freenect.launch


  • Launch RGBDViewer tool
wget https://github.com/JdeRobot/JdeRobot/raw/master/src/tools/rgbdViewer/freenect_ros.yml
rgbdViewer freenect_ros.yml

The content of freenect_ros.yml is the following:

rgbdViewer:
  CameraRGB:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "cameraA:tcp -h localhost -p 9998"
    Format: RGB8
    Topic: "/camera/rgb/image_raw"
    Name: cameraA
    Fps: 30

  CameraDEPTH:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "cameraB:tcp -h localhost -p 9998"
    Format: RGB8
    Topic: "/camera/depth_registered/sw_registered/image_rect"
    Name: cameraB
    Fps: 30

  PointCloud:
    Server: 0 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "pointcloud1:tcp -h localhost -p 9999"
    Topic: "/TurtlebotROS/cameraL/image_raw"
    Name: pointcloud
    Fps: 30

  RGBD:
    Server: 0 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "rgbd1:tcp -h localhost -p 9999"
    Topic: "/TurtlebotROS/cameraL/image_raw"
    Name: RGBD
    Fps: 30

  Pose3DMotors:
    Server: 0 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Pose3DMotors1:tcp -h 193.147.14.20 -p 9999"
    Topic: "/TurtlebotROS/cameraL/image_raw"
    Name: Pose3DMotors

  KinectLeds:
    Server: 0 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "kinectleds1:tcp -h 193.147.14.20 -p 9999"
    Topic: "/TurtlebotROS/cameraL/image_raw"
    Name: KinectLeds


  NodeName: rgbdViewer
  Width: 640
  Height: 480
  Fps: 15
  Debug: 1

And that's it. You should be looking at the images offered by the sensor in the RGBDViewer component, as in the following video:


Asus Xtion sensor + OpenniServer driver + RGBDViewer tool

The combination of OpenniServer and RGBDViewer is meant to obtain and translate information from RGBD sensors like the Microsoft Kinect or the Asus Xtion. In JdeRobot we use these components to obtain images and process them. In order to run this example, you must:

  • Connect the RGBD sensor in an exclusive USB port (the port must not be shared).
  • Once connected, run the openniServer:
openniServer openniServer.cfg

It will probably show a timeout warning; if it does not connect the first time, keep trying until it connects properly. You will know it has connected properly when it shows the following output:

Starting thread for camera: cameraA
              -------- openniServer: Component: CameraRGB created successfully(default -h 0.0.0.0 -p 9999@cameraA
Creating camera cameraB
Starting thread for camera: cameraB
              -------- openniServer: Component: CameraDEPTH created successfully(default -h 0.0.0.0 -p 9999@cameraB
  • Launch RGBDViewer
rgbdViewer rgbdViewer.yml

And that's it. You should be looking at the images offered by the sensor in the RGBDViewer component, as the following video shows:


Simulated FlyingKinect + NavigatorCamera + RGBDViewer

FlyingKinects are designed to make the sensors move through a specific scene in Gazebo. To achieve this, each sensor includes a pose3D interface that allows it to move in 3D space. To run an example, we have a Gazebo world that includes a house with some people moving around. This world can also have as many flyingKinects as we want, but this example only includes two of them. You only have to type the following in one terminal:

gazebo Actors_GrannyAnnie2.world

This will start up Gazebo with the house model and two flyingKinects. In order to teleoperate both sensors we have a specific component called navigatorCamera, which allows all the possible movements of the sensor position. In another terminal, type:

navigatorCamera navigatorCamera.cfg

The following video shows the execution of this example.


Speed control of flyingKinect

This feature allows the camera to be controlled with a canvas in which we can set the speed at which it is going to move. The control part of the component sends an increment of the current position every 20 ms to make the camera move as fast as it was told to in the canvas. It uses the pose3D interface (the same and only connection) for this task, so it is not necessary to open a new socket. The following video shows how it works in an empty world.
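
The following is a minimal sketch of that control loop, only to illustrate the idea; it is not the actual navigatorCamera source, and send_pose() is a stand-in for whatever pose3D call the real component makes over its already-open connection:

import time

def send_pose(x, y, z):
    # Stub: in the real component this would go through the existing pose3D proxy.
    print("pose -> (%.3f, %.3f, %.3f)" % (x, y, z))

def speed_control(vx, vy, vz, duration, dt=0.02):
    # Move the camera at (vx, vy, vz) m/s by sending a small position
    # increment every dt seconds (20 ms), as described above.
    x = y = z = 0.0
    for _ in range(int(duration / dt)):
        x += vx * dt
        y += vy * dt
        z += vz * dt
        send_pose(x, y, z)
        time.sleep(dt)

speed_control(vx=0.5, vy=0.0, vz=0.1, duration=2.0)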


Kinect2 + ROS driver + RGBDViewer tool

NOTE: Doesn't work yet

The combination of the freenect2 ROS driver and the RGBDViewer tool is meant to obtain and translate information from RGBD sensors like the Microsoft Kinect. In JdeRobot we use these components to obtain images and process them. In order to run this example, you must:

  • Install the jderobot-libfreenect2-dev Debian package
sudo apt-get install jderobot-libfreenect2-dev
  • Compile the driver in your catkin workspace:
cd $YOUR_CATKIN_WORKSPACE_PATH/src
git clone https://github.com/code-iai/iai_kinect2.git
cd iai_kinect2
rosdep install -r --from-paths .

cd ~/YOUR_CATKIN_WORKSPACE_PATH
catkin_make -DCMAKE_BUILD_TYPE="Release"
  • Connect the RGBD sensor in an exclusive USB 3.0 port (the port must not be shared).
  • Once connected, run the kinect2_bridge ROS driver:
source devel/setup.bash
roslaunch kinect2_bridge kinect2_bridge.launch


  • Launch RGBDViewer tool
rgbdViewer rgbdViewerKinect2.yml

The content of rgbdViewerKinect2.yml is the following:

rgbdViewer:
  CameraRGB:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "cameraA:tcp -h localhost -p 9998"
    Format: RGB8
    Topic: "/kinect2/hd/image_color_rect"
    Name: cameraA
    Fps: 30

  CameraDEPTH:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "cameraB:tcp -h localhost -p 9998"
    Format: RGB8
    Topic: "/kinect2/hd/image_depth_rect"
    Name: cameraB
    Fps: 30

  PointCloud:
    Server: 0 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "pointcloud1:tcp -h localhost -p 9999"
    Topic: "/TurtlebotROS/cameraL/image_raw"
    Name: pointcloud
    Fps: 30

  RGBD:
    Server: 0 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "rgbd1:tcp -h localhost -p 9999"
    Topic: "/TurtlebotROS/cameraL/image_raw"
    Name: RGBD
    Fps: 30

  Pose3DMotors:
    Server: 0 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Pose3DMotors1:tcp -h 193.147.14.20 -p 9999"
    Topic: "/TurtlebotROS/cameraL/image_raw"
    Name: Pose3DMotors

  KinectLeds:
    Server: 0 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "kinectleds1:tcp -h 193.147.14.20 -p 9999"
    Topic: "/TurtlebotROS/cameraL/image_raw"
    Name: KinectLeds


  NodeName: rgbdViewer
  Width: 640
  Height: 480
  Fps: 15
  Debug: 1

And that's it. You should be looking at the images offered by the sensor in the RGBDViewer component, as in the following video:


Wheeled indoor robots

Simulated TurtleBot + KobukiViewer

The KobukiViewer component was made to teleoperate wheeled robots like the Pioneer robot and the Kobuki (TurtleBot) robot. This component offers a really simple GUI that allows the user to teleoperate the robots, plus three checkbuttons to show the sensor information (laser, camera and pose3D). To run this example, you must:


  • add repositories:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list'
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key 67170598AF249743
sudo apt-add-repository "deb http://zeroc.com/download/apt/ubuntu$(lsb_release -rs) stable main"
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 5E6DA83306132997
sudo sh -c 'echo "deb [arch=amd64] http://jderobot.org/apt xenial main" > /etc/apt/sources.list.d/jderobot.list'
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 24E521A4


  • Install the following packages:
sudo apt install ros-kinetic-kobuki-gazebo jderobot-gazeboserver jderobot-kobukiviewer


  • Run Gazebo with the Kobuki (TurtleBot) world through a ROS launch file.
roslaunch /opt/jderobot/share/jderobot/launch/kobuki-simple-ros.launch
  • Run kobukiViewer to teleoperate the robot.
kobukiViewer kobukiViewer.yml

The configuration file should be ready to run; nevertheless, you should check the following depending on whether you want to run the Pioneer or the TurtleBot. The configuration file for kobukiViewer should look like this:

kobukiViewer:
  Motors:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Motors:default -h localhost -p 9001"
    Topic: "/turtlebotROS/mobile_base/commands/velocity"
    Name: kobukiViewerMotors
    maxV: 3
    maxW: 0.7

  Camera1:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "CameraL:default -h localhost -p 9001"
    Format: RGB8
    Topic: "/TurtlebotROS/cameraL/image_raw"
    Name: kobukiViewerCamera1

  Camera2:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "CameraR:default -h localhost -p 9001"
    Format: RGB8
    Topic: "/TurtlebotROS/cameraR/image_raw"
    Name: kobukiViewerCamera2

  Pose3D:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Pose3D:default -h localhost -p 9001"
    Topic: "//turtlebotROS/odom"
    Name: kobukiViewerPose3d

  Laser:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Laser:default -h localhost -p 9001"
    Topic: "/turtlebotROS/laser/scan"
    Name: kobukiViewerLaser

  Vmax: 3
  Wmax: 0.7
  NodeName: kobukiViewer

Notice that the camera configuration says Camera{L,R}.



Here is a video showing how this example works.

Simulated TurtleBot + KobukiViewer-web

The KobukiViewerJS component was made to teleoperate wheeled robots like the Kobuki (TurtleBot) from a web browser. To run this example, you must:

  • Run gazebo with the Kobuki (turtlebot) world.
gazebo kobuki-simple.world
  • Run kobukiViewerJS to teleoperate the robot.
kobukiviewerjs
  • Put the following in a web browser:
http://localhost:7777

To configure the tool, press the config button and enter the following configuration:

Here is a video showing how this example works.

Real TurtleBot + ROS + KobukiViewer

You can run a ROS driver made by Yujin specifically to control the Kobuki robot. To teleoperate the real Kobuki (TurtleBot) you must have the jderobot or jderobot-deps-dev package installed. Then you must set up the environment in order to launch this example in a distributed way:

  • add repositories:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list'
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key 67170598AF249743
sudo apt-add-repository "deb http://zeroc.com/download/apt/ubuntu$(lsb_release -rs) stable main"
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 5E6DA83306132997
sudo sh -c 'echo "deb [arch=amd64] http://jderobot.org/apt xenial main" > /etc/apt/sources.list.d/jderobot.list'
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 24E521A4
  • first install the following packages:
sudo apt install ros-kinetic-rplidar-ros ros-kinetic-kobuki-node ros-kinetic-urg-node ros-kinetic-laser-filters

And that's it for the Kobuki PC.

  • Then, if you are not already in the dialout group:
    sudo usermod -a -G dialout $USER 
  • First, connect the laser (the Hokuyo has 2 wires), then turn on the TurtleBot and plug it in.
  • Add permissions to laser:
sudo chmod 777 /dev/ttyACM0
  • To launch the driver:
  • If your Turtlebot has a Hokuyo laser use:
    roslaunch turtlebot-hokuyo.launch
  • If your Turtlebot has a rplidar laser use:
    roslaunch turtlebot-rplidar.launch
  • Now you can launch the kobukiViewer tool:
kobukiViewer kobukiViewerReal.yml

making sure that the configuration is correct. In order to teleoperate the robot, your kobukiViewerReal.yml file should look like this one:

kobukiViewer:
  Motors:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Motors:default -h localhost -p 9999"
    Topic: "/mobile_base/commands/velocity"
    Name: kobukiViewerMotors
    maxV: 3
    maxW: 0.7

  Camera1:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "CameraL:default -h localhost -p 9001"
    Format: RGB8
    Topic: "/camera/rgb/image_raw"
    Name: kobukiViewerCamera1

  Camera2:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "CameraR:default -h localhost -p 9001"
    Format: RGB8
    Topic: "/camera/rgb/image_raw"
    Name: kobukiViewerCamera2

  Pose3D:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Pose3D:default -h localhost -p 9001"
    Topic: "/odom"
    Name: kobukiViewerPose3d

  Laser:
    Server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Laser:default -h localhost -p 9001"
    Topic: "/scan"
    Name: kobukiViewerLaser
  Vmax: 3
  Wmax: 0.7
  NodeName: kobukiViewer


And that's it. Here is a video showing how this example works:


Drones

Simulated ArDrone + UAVViewer

The uav_viewer component is intended to teleoperate the Parrot ArDrone. This component offers several GUI elements to control the drone and see all the information from its sensors. To run this example, follow these steps:

  • Run gazebo with an ArDrone world.
gazebo ArDrone.world
  • Run uav_viewer
uav_viewer_py uav_viewer_py.yml

The configuration files are ready to run the example, but make sure you have all the interfaces well configured first.

Here you can see a video showing how to run this example:

Real ArDrone + Ardrone_server + Uav_viewer

There is a driver for the Parrot ArDrone2 made in JdeRobot: ardrone_server. In order to compile it you must have the jderobot or jderobot-deps-dev package installed, which provides the ardronelib library needed to compile ardrone_server. Once you have the project compiled, you must:

  • Connect the battery of the ArDrone to the robot. It will create its own Wi-Fi network called Ardrone-XXXX. Here's a video showing how to connect the battery to the drone
  • Connect your laptop to the new Wi-Fi network created by the drone.
  • Once connected, run ardrone_server:
ardrone_server ardrone_interfaces.cfg

It should show some information (like the battery level) of the drone once connected.

  • Launch uav_viewer
uav_viewer_py uav_viewer_py_real.yml

And control the drone using the gui:

Here's a video showing how to execute this example:

3DRSoloDrone + MAVLinkServer + Uav_viewer_py

There is a driver for the SoloDrone from 3DR made in JdeRobot: MAVLinkServer.

To run this example, you must:

  • Connect the battery of the 3DRSoloDrone to the robot and turn it on. It will create its own Wi-Fi network called SoloDrone-XXXX.
  • Connect your laptop to the new Wi-Fi network created by the drone.
  • Once connected, run MAVLinkServer:
MAVLinkServer.sh mavlinkserver.yml

Make sure that the configuration is correct. In order to teleoperate the robot, your mavlinkserver.yml file should look like this one:

Camera:
  Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
  Proxy: "default -h 0.0.0.0 -p 9999"
  Format: RGB8
  Topic: "/MavLink/image_raw"
  Name: MavLinkCamera

Pose3D:
  Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
  Proxy: "default -h 0.0.0.0 -p 9998"
  Topic: "/MavLink/Pose3D"
  Name: MavLinkPose3d

CMDVel:
  Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
  Proxy: "default -h 0.0.0.0 -p 9997"
  Topic: "/MavLink/CMDVel"
  Name: MavLinkCMDVel

Navdata:
  Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
  Proxy: "default -h 0.0.0.0 -p 9996"
  Topic: "/MavLink/Navdata"
  Name: MavLinkNavdata

Extra:
  Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
  Proxy: "default -h 0.0.0.0 -p 9995"
  Topic: "/MavLink/Extra"
  Name: MavLinkExtra

It should show some information of the drone once connected.

  • Launch uav_viewer_py
uav_viewer_py uav_viewer_py.yml

Make sure that the configuration is correct. In order to teleoperate the robot, your uav_viewer_py.yml file should look like this one:

UAVViewer:
  Camera:
    Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Camera:default -h localhost -p 9999"
    Format: RGB8
    Topic: "/IntrorobROS/image_raw"
    Name: UAVViewerCamera

  Pose3D:
    Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Pose3D:default -h localhost -p 9998"
    Topic: "/IntrorobROS/Pose3D"
    Name: UAVViewerPose3d

  CMDVel:
    Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "CMDVel:tcp -h localhost -p 9997"
    Topic: "/IntrorobROS/CMDVel"
    Name: UAVViewerCMDVel

  Navdata:
    Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Navdata:tcp -h localhost -p 9996"
    Topic: "/IntrorobROS/Navdata"
    Name: UAVViewerNavdata

  Extra:
    Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "Extra:tcp -h localhost -p 9995"
    Topic: "/IntrorobROS/Extra"
    Name: UAVViewerExtra

  Xmax: 10
  Ymax: 10
  Zmax: 5
  Yawmax: 1

NodeName: UAVViewer

Here's a video showing how this example works:


Videos from YouTube

This is an example of how the YouTubeServer driver works. You will need a JdeRobot component capable of displaying images, for example uav_viewer or colorTuner. To run this component, you first must change the configuration file. Its properties have to correspond with the JdeRobot component that we use to show the video; you also have to configure whether you are going to use a live event or a YouTube video as the video input. In this example colorTuner will be used. Here is how both configuration files should look in order to make this example work (the config files are located at /opt/jderobot/share/jderobot/conf):

##Config.yml

youtubeServer:
  ImageSrv:
    Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS (Not supported)
    Proxy: "default -h 0.0.0.0 -p 9999"
    URL: "https://www.youtube.com/watch?v=zw47_q9wbBE"
    LiveBroadcast: False #True for youtube live events
    OutputDir: "/tmp/" #where do you want to store temporal files. INCLUDE THE FINAL /
    FPS: 24       #framerate to serve images
    Format: 18    #youtube-dl format to NOT streaming video download. Run: 'youtube-dl --list-formats [URL]'  to see available video formats
    Name: youtubeServer_py

NodeName: youtubeServerCfg


#
#ColorTuner config
#

ColorTuner:
  Camera:
    Server: 1 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
    Proxy: "youtubeServer:tcp -h localhost -p 9999"
    Format: RGB8
    Topic: "/TurtlebotROS/cameraL/image_raw"
    Name: cameraA
    Fps: 30

  NodeName: ColorTuner


To run it, just open a terminal and type:

youtubeserver youtubeserver.yml

then in another terminal, you just type:

colorTuner colorTuner_py.yml

Here you have a little video showing how to run it.


OpencvDemo

This component implements some of the operations provided by the OpenCV library.

They are related to image processing filters and feature detection algorithms. The behavior of each filter can be modified by adjusting its parameters directly in the GUI. These operations are:

  • Image Processing-> Gray scale, Color, Sobel and Laplace filters, Multiresolution Pyramid and Convolutions.
  • Feature Extraction-> Harris Corners, Canny Edge Detector and Hough Transform both for lines and circles.
  • Movement Detection-> Optical Flow Detector.

This tool has only one thread, which takes care of getting images and showing them through the GUI. OpenCVDemo connects through the ICE interface to a camera server (real or simulated), takes images from it, and each image is shown twice through a GUI class (called viewer).
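
A rough sketch of that single-thread structure, just to illustrate the flow; it is not the actual OpenCVDemo source, and get_image() and Viewer stand in for the real ICE camera proxy and the GTK viewer class:

import time

def get_image():
    # Stand-in for the ICE camera proxy call that returns the current frame.
    return "frame"

def apply_filters(image):
    # Whatever filters are currently enabled in the GUI would be applied here.
    return image

class Viewer(object):
    # Stand-in for the GTK 'viewer' class: shows input and filtered output.
    def display(self, original, filtered):
        print("showing", original, "and", filtered)

viewer = Viewer()
while True:
    img = get_image()                        # take an image from the camera server
    viewer.display(img, apply_filters(img))  # each image is shown twice: input and output
    time.sleep(1.0 / 30)                     # pace the loop roughly at the camera rate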

This tool has an advanced user interface made with GTK libraries. The GUI consists of two image frames, where the images taken from the camera are shown (the input image and the output image with the filters applied). It also includes different kinds of GUI elements to apply multiple effects to the input image, such as:

  • Grayscale
  • Sobel filter
  • Laplace filter
  • Pyramid filter
  • Color filter
  • Convolution with different effects.

All of these effects can be configured with the rest of the elements of the GUI, so we can apply complex filters to the input image.

This tool uses only one ICE Interface:

Camera

Execution

Note that to make this component run, you need an image provider (a camera) running, listening at the ports you want to bind, and properly configured (through .cfg files) as explained [here]. So first you have to run a camera server and then run the opencvdemo tool.

The way to make it run is the following:

1. Run a camera server (we use CameraServer tool for this example):

cameraserver cameraserver.cfg

2. Run the opencvdemo tool

opencvdemo opencvdemo.cfg

Configuration file

You can also check the configuration file here

To configure this component, you just have to specify where the images come from in the "opencvdemo.cfg" file. Here is a configuration example to load the images from the laptop's webcam (using the CameraServer component):

Opencvdemo.Camera.Proxy=cameraA:tcp -h 127.0.0.1 -p 9999

Where "127.0.0.1" is the IP address of the device (the local IP in this example) and "9999" is the port number, which must coincide with the port of the service provider (cameraserver, gazeboserver, etc.).
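
For completeness, the provider side must expose the same camera name and port. An illustrative cameraserver.cfg fragment (exact property names may vary between versions):

CameraSrv.Endpoints=default -h 127.0.0.1 -p 9999
CameraSrv.NCameras=1
CameraSrv.Camera.0.Name=cameraA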