Hustcalm-colab

From jderobot

Title: Structure reconstruction using a Kinect v2 device.

People[edit]

  • Lihang Li(licalmer [at] gmail [dot] com)
  • Francisco Miguel Rivas Montero (franciscomiguel [dot] rivas [at] urjc [dot] es)
  • José María Cañas Plaza (jmplaza [at] gsyc [dot] es)

Development[edit]

JdeRobot Framework[edit]

JdeRobot Publications[edit]

Two papers related to JdeRobot can be found at "Jderobot open source framework for robotic, computer vision and home automation applications" and "Recent advances in the JdeRobot framework for robot programming". For more publications, see JdeRobot Publications.

JdeRobot Source Code[edit]

The source code lives in this Github repository.

CppLint[edit]

TBD.

Jenkins CI[edit]

JdeRobot Jenkins.

Development environment[edit]

Sensor[edit]

Microsoft Kinect v2 (K4W2)

SDK -> libfreenect2[edit]

Installation process[edit]

git clone https://github.com/OpenKinect/libfreenect2
sudo apt-get install -y build-essential libturbojpeg libtool autoconf \
    libudev-dev cmake mesa-common-dev freeglut3-dev libxrandr-dev \
    doxygen libxi-dev libopencv-dev automake
cd libfreenect2/depends
./install_ubuntu.sh
cd ..
export LIBFREENECT_ROOT=$PWD # you may want to add this environment variable to your .bashrc
cd examples/protonect/
cmake .
make 
sudo make install

See libfreenect2 official guide for more details.

Troubleshoot[edit]

  • libturbojpeg linker error:
 sudo ln -s /usr/lib/x86_64-linux-gnu/libturbojpeg.so.0.0.0 /usr/lib/x86_64-linux-gnu/libturbojpeg.so
  • OpenCL:

If you get the error "CL/cl.hpp not found", either install the OpenCL development headers (opencl-dev) or disable OpenCL entirely:

 cmake -DENABLE_OPENCL=OFF . 
 make -j4
  • OpenGL decoding is not working:

If everything seems to work but the IR and depth images are completely black, GPU acceleration is not working. You can either fix the underlying problem (often caused by a dual graphics card setup; Bumblebee may help) or disable GPU acceleration. To disable it, change the line:

 OpenGLDepthPacketProcessor *depth_processor = new OpenGLDepthPacketProcessor(0, debug_); (on src/packet_pipeline.cpp:108)

to:

 CpuDepthPacketProcessor *depth_processor = new CpuDepthPacketProcessor();


This means that all decoding will be computed on the CPU using multithreaded parallelism.

  • Hide depth_packet_processor window
 static const bool do_debug = false; (on opengl_depth_packet_processor.cpp#L322)

Compatible hardware[edit]

  • Mini PC

IntelNUC

  • USB controller

NEC D720200

Other controllers

Components[edit]

kinect2Server[edit]

recorder[edit]

JdeRobot Recorder.

replayer[edit]

JdeRobot Replayer.

kinect2 structure reconstruction[edit]

Abstract[edit]

Two core components will be implemented: Visual Odometry, which is responsible for tracking the Kinect to obtain an accurate pose for every frame (including loop closure), and Dense Mapping, in which all depth and color frames are aligned into a global coordinate frame, using a pose graph (possibly TORO) optimized to achieve the best results. Once the point cloud reconstruction is done, meshing and other post-processing can be applied to obtain polished models.

See the proposal for more details.

Timeline[edit]

4.27~5.24[edit]

Goal: Getting familiar with Ice programming and the recorder/replayer tools inside JdeRobot (similar to ROS bags).

Reference:

ZeroC Ice Training

JdeRobot Tools

5.25~5.31[edit]

Goal: Getting familiar with 3D data sources. (1) Use the Replayer as the data source and connect rgbdviewer to it (2) Use the Gazebo simulator as the data source and connect rgbdviewer to it

Reference:

Builder of Compacts 3D Maps from RGBD Sensor Data

Result:

replayer+kinect+rgbdViewer

Report:

Weekly Report - Week 1

6.1-6.7[edit]

Goal: Getting familiar with Ice programming. (1) Make recorded files and the Gazebo simulator work normally as 3D data sources (2) Write my own Ice client component to read from those data sources

Reference:

Running JdeRobot

Developing JdeRobot

Kinect Environment

Report:

Weekly Report - Week 2

6.8-6.14[edit]

Goal: Start porting RTAB-Map to JdeRobot. (1) Implement a component to get RGB and depth images from the replayer component (2) Experiment with the recorded files using RTAB-Map, explore the registration of RGB and depth images

Reference:

libfreenect2 registration

Report:

Weekly Report - Week 3

6.15-6.21[edit]

Goal: Implement the I/O part between the Replayer component and rtabmap. (1) Make RTAB-Map compile under JdeRobot framework (2) Implement the I/O part, make the RGB and depth images from the Replayer component be the data source of the rtabmap component

Reference:

rtabmap CameraRGBD

Report:

Weekly Report - Week 4

6.22-6.28[edit]

Goal: Implement the reconstruction part of the rtabmap component. (1) Make the Odometry work properly (2) Make the reconstruction work using RGB and depth images

Reference:

rtabmap reconstruction

Report:

Weekly Report - Week 5

6.29-7.12[edit]

Goal: Make the reconstruction part of the rtabmap component work with Replayer. (1) Make the reconstruction part work properly with Replayer and other data sources (2) Submit the midterm evaluation

Reference:

rtabmap reconstruction

Kinect Calibration

Report:

Weekly Report - Week 6&7

7.13-7.19[edit]

Goal: Clean and refactor the rtabmap component. (1) Clean up the CMakeLists.txt files and refactor the I/O along with the reconstruction part (2) Add GUI elements to let users choose new data sources (3) Get familiar with the calibration tool and load a YAML file with calibration parameters into rtabmap

Reference:

rtabmap

Calibration Tool

Report:

Weekly Report - Week 8

7.20-7.26[edit]

Goal: Fix the CMake merge issue, extract the keyframes and explore camera pose estimation algorithms. (1) Clean the CMakeLists.txt files to make the rtabmap component consistent with the JdeRobot framework (2) Store keyframes (position and RGB-D data) and view them in jderobotViewer (3) Explore algorithms that can use the keyframes and the sensor data at the current position to calibrate the pose, using the 3D map as a reference

Reference:

cvsba

sba

Report:

Weekly Report - Week 9

7.27-8.2[edit]

Goal: More tests on storing and loading keyframes, and design of the camera pose estimation algorithms. (1) Store and load keyframes (position and RGB-D data) (2) Explore and design algorithms that can use the keyframes and the sensor data at the current position to calibrate the pose, using the 3D map as a reference

Reference:

cvsba

sba

Report:

Weekly Report - Week 10

8.3-8.9[edit]

Goal: Implement the 3D viewer to move the camera around, project the 3D points and try the camera pose estimation using cvsba. (1) Put a camera in the 3D map and move it around freely in the PCL visualizer (2) Try the camera pose estimation using cvsba

Reference:

pcl_visualizer

pcd_viewer

cvsba

sba

Report:

Weekly Report - Week 11

8.10-8.16[edit]

Goal: Estimate camera pose using sba or cvsba. (1) Develop the cameraPoseEstimator3D component using Qt (2) Use sba or cvsba to estimate camera pose

Reference:

cvsba

sba

Report:

Weekly Report - Week 12

8.17-8.23[edit]

Goal: Estimate camera pose using cvsba. (1) Use cvsba to estimate camera pose by utilizing two keyframes (2) Explore camera pose estimation using multiple keyframes

Reference:

cvsba

sba

Back Projection

JdeRobot myprogeo

Report:

Weekly Report - Week 13

8.24-8.30[edit]

Goal: Final evaluation of GSoC and project wrap-up. (1) Submit final evaluation (2) Integrate rtabmap and cameraPoseEstimator3D components to JdeRobot and write documentation (3) Discuss with mentors about future work as a contributor

Reference:

How do evaluations work?

Code Sample Submission Guidelines

Git team workflows: merge or rebase?

Merging vs. Rebasing

Report:

Weekly Report - Week 14

Resources[edit]

  • Kinect Datasets

http://www0.cs.ucl.ac.uk/staff/M.Firman/RGBDdatasets/

  • Libfreenect

https://github.com/OpenKinect/libfreenect2

  • RTAB-MAP

http://introlab.github.io/rtabmap/

  • Ice Getting Started

https://doc.zeroc.com/display/Ice/Hello+World+Application

https://zeroc.com/doc/Ice-3.2b/manual/Hello.4.3.html