Flperez-tfm



Visual_markers vs SD_SLAM

This section contains all the information needed to compare both algorithms for localizing the robot. We use several robots in a scene with a map for visual_markers, and the same scene for SD_SLAM. To install SD_SLAM I followed the steps from the repository [1], and to install visual_markers the steps of this wiki.

The process is as follows:

1. Create the map (gazebo or real).

2. Teleoperate a robot while recording the video (calibrating the camera first if necessary). To record the video, I use rosbags; a concrete example follows this list.

   rosbag record <topic_name>
   rosbag play <name_record.bag>

3. Compare the results of both algorithms.
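
As a concrete sketch of step 2, assuming the camera topic /TurtlebotROS/cameraR/image_raw used in the configuration later on this page, and a hypothetical bag name lab_run.bag:

   rosbag record -O lab_run.bag /TurtlebotROS/cameraR/image_raw
   rosbag play lab_run.bag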

Rosbag laboratory

1. SD-SLAM

2. Visual markers

Compare 3D

GUI

To view the information, I have made a simple GUI which plots real-time data. With the checkboxes you can choose the kind of data, or save all the information with the button. Some pictures to explain it: [to be inserted]

Gazebo world

To create a world that publishes with ICE, you have to use the plugin called 'turtlebotplugin'. If you want to publish the information with ROS, you have to use the turtlebotROS model instead; a hedged sketch is shown below.
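
A minimal sketch of the relevant part of a .world file, not taken verbatim from the repository: the include assumes a model named turtlebotROS on the Gazebo model path, and libturtlebotplugin.so is an assumed filename for the ICE plugin:

   <!-- ROS case: include the turtlebotROS model (assumed to be on the Gazebo model path) -->
   <include>
     <uri>model://turtlebotROS</uri>
   </include>
   <!-- ICE case: load the plugin (the library filename is an assumption) -->
   <plugin name="turtlebotplugin" filename="libturtlebotplugin.so"/>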


Experiment 1: SimpleWorld

For this experiment, I have modified the world simple-kobuki.world to publish the information with ROS, and I have included some markers to localize the robot.

Result

Visual_markers

How to use it

Installation

This component needs the ArUco library and JdeRobot:

1. To install ArUco, I have followed the instructions from [2] and downloaded version 2.0.19 from [3].

2. To install JdeRobot, follow the instructions from the Installation page. If you have installed JdeRobot from Debian packages [4], you also have to install the JdeRobot dependencies: sudo apt install jderobot-deps-dev

Compile the component:

   git clone https://github.com/JdeRobot/slam-visual_markers.git
   cd slam-visual_markers
   
   ## Working around a bug in the JdeRobot installation (temporary)
   sudo mkdir -p /opt/jderobot/include/jderobot/jderobotutil/utils
   sudo cp /opt/jderobot/include/jderobot/jderobotutil/CameraUtils.h /opt/jderobot/include/jderobot/jderobotutil/utils/
   ##
   mkdir build && cd build
   cmake .. \
     -DJderobotInterfaces=/opt/jderobot/lib/libJderobotInterfaces.so \
     -Dcomm=/opt/jderobot/lib/libcomm.so \
     -Djderobotutil=/opt/jderobot/lib/libjderobotutil.so \
     -DparallelIce=/opt/jderobot/lib/libparallelIce.so \
     -Dcolorspacesmm=/opt/jderobot/lib/libcolorspacesmm.so \
     -DxmlParser=/opt/jderobot/lib/libxmlParser.so \
     -Dprogeo=/opt/jderobot/lib/libprogeo.so \
     -Dconfig=/opt/jderobot/lib/libconfig.so \
     -Dlogger=/opt/jderobot/lib/liblogger.so \
     -DlibjderobotHandlers=/opt/jderobot/lib/libjderobotHandlers.so \
     -DGlog=/usr/lib/x86_64-linux-gnu/libglog.so \
     -DlibIceStorm=/usr/lib/x86_64-linux-gnu/libIceStorm++11.so.36
   make
   sudo make install

Alternatively, you can compile this component with the script installation.sh:

   chmod +x installation.sh
   ./installation.sh

To sum up, you can watch a video of these steps. Since my own computer already has the dependencies installed and the component built, I recorded the steps in the jderobot Docker image. To launch it:

    sudo docker run -ti jderobot/jderobot bash
    apt-get update; apt-get install zip git -y

Use

To use cam_autoloc or visual_markers, three files are necessary:

1. sim_calib.yml: this file contains the calibration parameters. Important! I have used the JdeRobot tool CameraCalibrator, which writes the camera matrix and distortion coefficients under the names camera_matrix and distortion_coefficients. If you use another tool, you have to adjust these names. A hedged example is shown below.
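
A minimal sketch of such a file, assuming CameraCalibrator writes the standard OpenCV FileStorage format; the numeric values here are placeholders, not a real calibration:

   %YAML:1.0
   camera_matrix: !!opencv-matrix
      rows: 3
      cols: 3
      dt: d
      data: [ 640., 0., 320., 0., 640., 240., 0., 0., 1. ]
   distortion_coefficients: !!opencv-matrix
      rows: 5
      cols: 1
      dt: d
      data: [ 0., 0., 0., 0., 0. ]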

2. markers.txt: this file contains the map. The first line holds the marker size, and each following line gives one marker's ID and pose:

   Marker_id Coordinate_x Coordinate_y Coordinate_z Roll Pitch Yaw

For example (here the second line places marker 0 at (-8.90, -4.83, 1.77) with a pitch of about 1.57 rad):

   0.28
   0 -8.90283 -4.82563 1.77057 0 1.56824 0
   1 -3.79959 0.736049 1.82504 1.56834 -0 0
   2 -2.19541 0.724153 1.76356 1.57188 -0 0
   3 -3.90063 -5.69347 1.64801 0 -1.53132 0
   ...

3. configuration.yml: this file contains the information to connect to the camera and publish the Pose3D. For example, to connect to the webcam after launching cameraserver cameraserver.cfg:

   VisualMarkers:
     Camera:
       Server: 2 # 0 -> Deactivate, 1 -> Ice, 2 -> ROS
       Proxy: "cameraA:default -h localhost -p 9999"
       Format: RGB8
       Topic: "/TurtlebotROS/cameraR/image_raw"
       Name: VisualMarkersCamera1
     Pose3D:
       Server: 2 # 0 -> Deactivate, 1 -> Ice, 2 -> ROS
       Proxy: "default -h 0.0.0.0 -p 8998"
       Topic: "/VisualMarkers/pose"
       Name: VisualMarkersPose3d
     NumMarker:
       Server: 2 # 0 -> Deactivate, 1 -> Ice, 2 -> ROS
       Topic: "/visual_slam/numMarker"
       Name: VisualMarkersDetected
     Timer:
       Server: 2 # 0 -> Deactivate, 1 -> Ice, 2 -> ROS
       Topic: "/visual_slam/time"
       Name: VisualMarkersTime
     NodeName: VisualMarkers

In addition to these files, the component needs the Glade files ardrone_slame.glade and introrob.glade.

If everything is correct, launch it with:

   ./visual_markers config.yml sim_calib.yml
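
To check that poses are actually being published in the ROS mode (Server: 2), you can echo the topic from the configuration above:

   rostopic echo /VisualMarkers/pose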

Summary

In this section I wanted to understand how Manuel Zafra's project works, so I followed this plan:

Moreover, to use this node correctly, we have to calibrate the cameras. For that, I have used the JdeRobot tool CameraCalibrator, where you only have to change the port to connect to the camera stream.


I have obtained the calibration files for my webcam, the Kobuki's Xtion camera and the Parrot's camera.

cam_autoloc

To use the cam_autoloc component, we must install the ArUco library [5] and OpenCV 3.2.0 [6] and link them in the Qt Creator project; a hedged sketch of the linking step follows.
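
A minimal sketch of the linking step as qmake .pro additions, assuming both libraries are installed under /usr/local; the exact OpenCV module list is an assumption and depends on what the project uses:

   # Hypothetical .pro additions; adjust paths to your installation
   INCLUDEPATH += /usr/local/include
   LIBS += -L/usr/local/lib -laruco \
           -lopencv_core -lopencv_imgproc -lopencv_highgui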

Webcam

To connect to the webcam's stream, follow these steps:

1. Launch cameraserver cameraserver.cfg.

2. Launch ./cam_autoloc --Ice.Config=camweb.cfg in your terminal. The file camweb.cfg contains:

   # Interface to connect webcam
   CamAutoloc.Camera.Proxy = cameraA:default -h localhost -p 9999
   CamAutoloc.Pose3D.Proxy = default -h 0.0.0.0 -p 8998
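
Both commands together, each launched in its own terminal:

   cameraserver cameraserver.cfg
   ./cam_autoloc --Ice.Config=camweb.cfg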


Gazebo

Kobuki

Drone

If you want to use cam_autoloc with a simulated drone, you have to use the following configuration:

   # Camera RGB interface
   CamAutoloc.Camera.Proxy=Camera:default -h localhost -p 9000
   # Pose3D interface
   CamAutoloc.Pose3D.Proxy=default -h 0.0.0.0 -p 8998

Moreover, you have to launch a world in Gazebo with the drone (in this example I have used "DroneTest.world") and uav_viewer to move around the world; a sketch of the commands follows.
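
As a sketch, assuming Gazebo is launched directly with the world file and that uav_viewer takes a configuration file here called uav_viewer.cfg (an assumed name, adjust to your setup):

   gazebo DroneTest.world
   uav_viewer uav_viewer.cfg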

Besides that, I have used the world "AprilTagsFlat.world" from Manuel's project to simulate a possible world with tags.

cam_autoloc+navigator

Gazebo

Drone

Real

Drone

navigator

First steps

Summary

In this section, I followed this path:

Installing and Learning JdeRobot

First of all, I installed JdeRobot from Debian packages following the instructions in [7]. Then, I did examples 2.1 and 5.1 from [8] to learn how JdeRobot works.


Example: Cameraserver + Cameraview

Example: OpenniServer + RGBDViewer + Asus Xtion

Example: Simulated ArDrone + UAVViewer

Example: Simulated TurtleBot + KobukiViewer

Example: Real TurtleBot + Kobuki_driver + KobukiViewer

Read old works

After that, I had to learn the state of the art in visual SLAM, so I read previous works by other students. First I read Alberto López's Final Master's Project, "Autolocalización visual robusta basada en marcadores" ("Robust visual self-localization based on markers"), and its continuation, "Seguimiento de rutas 3D por un drone con autolocalización visual con balizas" ("3D route following by a drone with visual self-localization using beacons"), by Manuel Zafra.

Moreover, I began reading Eduardo Perdices García's doctoral thesis to learn about different methods and algorithms.