Ilope-tfg


Project Card

Project Name: The topic is not clear yet

Author: Irene Lope Rodríguez [i.lope@alumnos.urjc.es]

Academic Year: 2016/2017

Degree: Degree in Audiovisual Systems and Multimedia Engineering.

GitHub:[1]

Tags: JdeRobot, Gazebo, TeachingRobotics


Vacuum Practice

SOLUTION

  • Final version (short):


  • Final version (long):


  • v9 (Driving faster):

  • Visibility (another map):

  • Visibility:

  • v8:

  • v7.1:

  • v7:

  • v6:

  • v5:

  • v4:

  • v3:

  • v2:

  • v1:

  • v0:




  • World:

Updated:


  • Map:

Updated:


  • GUI:

The vacuum can be teleoperated:

Updated:


  • Referee:

Updated:


New widget: Bar


New widget: Analog clock


SOLUTION

  • v0:

  • v1:

  • v2:

AutoPark Practice

SOLUTION

  • v2

  • v1:

  • First steps with OMPL app:


taxiLaser model

We have introduced 3 lasers into the yellowTaxi model:


Autopark World


Plugin (Three lasers)

First, we checked that the plugin could connect each of the three lasers separately:

  • Laser 1:


  • Laser 2:


  • Laser 3:


Afterwards, we connected the three lasers at the same time:



GUI


Updated:


Final GUI:



REFEREE


Updated:


Stop Practice

SOLUTION

  • Final Version (left):

  • Final Version (right):


  • Final (left):

  • Final (right):


  • v8:

  • v7:

  • v6:

  • v5:

  • v4:

  • v3:

  • v2:

  • v1:



Updated world: new car + lamp post + gas station

Updated


We have also introduced a new plugin. Now there is a car that moves on its own:


Different velocities and directions:


First world

To create a world in Gazebo we need to create a .world file. In this file we add the models we want using the <include> tag. We can use an existing model or create a new one. Several models can be found at: https://bitbucket.org/osrf/gazebo_models/src

<?xml version="1.0"?>
<sdf version="1.4">
  <world name="default">
  
    <scene>
      <grid>false</grid>
    </scene>
  
    <!-- A global light source -->
    <include>
      <uri>model://sun</uri>
    </include>
    
    
    <!-- Ground -->
    <include>
      <uri>model://ground_plane</uri>
    </include>
    
    
    <!-- Stop signs -->
    <include>
      <static>true</static>
      <uri>model://gazebo/models/stop_sign</uri>
      <pose>3 -3 0 0 0 0</pose>
    </include>
    
    <include>
      <static>true</static>
      <uri>model://gazebo/models/stop_sign</uri>
      <pose>3 3 0 0 0 1.55</pose>
    </include>
    
    
    <!-- Houses -->
    <include>
      <uri>model://house_1</uri>
      <pose>-8.5 7.5 0 0 0 0</pose>
    </include>
    
    <include>
      <uri>model://house_2</uri>
      <pose>7 6 0 0 0 0</pose>
    </include>
    
    <include>
      <uri>model://house_3</uri>
      <pose>-4.5 -6 0 0 0 1.55</pose>
    </include>
    
    
    <!-- A duck -->
    <model name="duck">
      <static>true</static>
      <link name="body">
        <visual name="visual">
            <pose>6 -5 0 1.5708 0 0</pose>
            <geometry>
                <mesh><uri>file://duck.dae</uri></mesh>
            </geometry>
        </visual>
      </link>
    </model>
    
    
    <!-- A taxi -->
    <include>
      <uri>model://yellowTaxi</uri>
      <pose>0 0 0.2 0 0 1.5</pose>
    </include>
    
    
    <!-- Roads -->
    <road name="my_road_1">
      <width>6</width>
      <point>0 -20 0.02</point>
      <point>0 20 0.02</point>
    </road>
    
    <road name="my_road_2">
      <width>6</width>
      <point>-20 0 0.02</point>
      <point>10 0 0.02</point>
      <point>20 0 2.5</point>
      <point>25 0 2.5</point>
      <point>35 0 0.02</point>
      <point>40 0 0.02</point>
    </road>
    
    
  </world>
</sdf>

To see the Gazebo world, run in a terminal:

gazebo nameofyourfile.world


Setting up a road with elevations:

We use the modeling program SketchUp to make a model of a road, which is exported with the .dae extension.

We then include the model in a Gazebo world.

<?xml version="1.0"?>
<sdf version="1.4">
  <world name="default">
  
    <scene>
      <grid>false</grid>
    </scene>
  
    <!-- A global light source -->
    <include>
      <uri>model://sun</uri>
    </include>
    
    <!-- A kobuki -->
    <include>
        <uri>model://turtlebotJde2cam</uri>
        <pose>40 30 8 0 0 0</pose>
    </include>
    
    <!-- A road -->
    <model name="road">
      <static>true</static>
      <link name="body">
        <pose>0 -4 -1.25 0 0 0</pose>
        <collision name="collision">
          <geometry>
            <mesh><uri>model://carretera_1.dae</uri></mesh>
          </geometry>   
        </collision>
        <visual name="visual">
          <geometry>
            <mesh><uri>file://carretera_1.dae</uri></mesh>
          </geometry>
        </visual> 
      </link> 
    </model>

  </world>
</sdf>

With the <collision> tag we indicate that this model is solid. We can test it by driving a robot over it.


GUI


Updated:

Practices of Teaching Robotics

To perform the practices described below, the Gazebo simulator, the JdeRobot software and, afterwards, the Teaching Robotics package must be installed beforehand.

Once the installation is complete, we can start working on the practices.

Practice 1: Follow Line

The aim of this practice is to make a Formula 1 car follow a red line painted on a race circuit. To carry out the practice, we run in a terminal:

  • If we want to see the Gazebo world and the images from the cameras:
./run_it.sh GUI
  • If we only want to see the images from the cameras:
./run_it.sh

If we run the practice we can see the circuit that the Formula 1 car must follow.


In addition, the original images captured by the cameras installed on the car appear, together with the processed ones.


The practice’s code will be written in the file ‘MyAlgorithm.py’. We will follow these steps (a consolidated sketch of how they fit together is shown after step 11):

1. Import the OpenCV and NumPy libraries for image processing:

import cv2
import numpy as np

2. Obtain the camera images:

imageLeft = self.sensor.getImageLeft()
imageRight = self.sensor.getImageRight()

3. Convert from the RGB to the HSV color model to filter the line:

imageRight_HSV = cv2.cvtColor(imageRight,cv2.COLOR_RGB2HSV)
imageLeft_HSV = cv2.cvtColor(imageLeft, cv2.COLOR_RGB2HSV)


4. Set the threshold to filter the color of the line:

value_min_HSV = np.array([0, 235, 60])
value_max_HSV = np.array([180, 255, 255])

5. Filter the images with the chosen threshold:

imageRight_HSV_filtered = cv2.inRange(imageRight_HSV, value_min_HSV, value_max_HSV)
imageLeft_HSV_filtered = cv2.inRange(imageLeft_HSV, value_min_HSV, value_max_HSV)

6. Create a mask with the red pixels in the foreground:

imageRight_HSV_filtered_Mask = np.dstack((imageRight_HSV_filtered, imageRight_HSV_filtered, imageRight_HSV_filtered))
imageLeft_HSV_filtered_Mask = np.dstack((imageLeft_HSV_filtered, imageLeft_HSV_filtered, imageLeft_HSV_filtered))

7. Display the processed images on the screen:

self.setRightImageFiltered(imageRight_HSV_filtered_Mask)
self.setLeftImageFiltered(imageLeft_HSV_filtered_Mask)


8. With shape we get the number of rows and columns of the image:

size = imageLeft.shape
rows = size[0]
columns = size[1]

9. To locate the line we must look for the pixels which change from black to white (left edge of the line) and from white to black (right edge):

position_pixel_left = []
position_pixel_right = []

# Scan row 365 of the filtered image, comparing each pixel with the previous one.
# The filtered image is uint8, so a black-to-white change gives 255 and a
# white-to-black change wraps around to 1; both are caught by value != 0.
for i in range(1, columns):
    value = imageLeft_HSV_filtered[365, i] - imageLeft_HSV_filtered[365, i-1]
    if (value != 0):
        if (value == 255):
            # black -> white: left edge of the line
            position_pixel_left.append(i)
        else:
            # white -> black: right edge of the line
            position_pixel_right.append(i-1)

10. Calculate the middle position of the line:

if ((len(position_pixel_left) != 0) and (len(position_pixel_right) != 0)):
    position_middle = (position_pixel_left[0] + position_pixel_right[0]) / 2
elif ((len(position_pixel_left) != 0) and (len(position_pixel_right) == 0)):
    position_middle = (position_pixel_left[0] + columns) / 2
elif ((len(position_pixel_left) == 0) and (len(position_pixel_right) != 0)):
    position_middle = (0 + position_pixel_right[0]) / 2
else:
    position_pixel_right.append(1000)
    position_pixel_left.append(1000)
    position_middle = (position_pixel_left[0] + position_pixel_right[0])/ 2

11. Calculate the deviation from the center of the image to determine whether the line is straight or curved, and then adjust the linear and angular velocities of the Formula 1 accordingly:

desviation = position_middle - (columns/2)
if (desviation == 0):
    # The line is centered: drive straight at full speed
    self.sensor.setV(10)
elif ((len(position_pixel_right) != 0) and (position_pixel_right[0] == 1000)):
    # The value 1000 set in step 10 means the line was not found: turn gently to search for it
    self.sensor.setW(-0.0000035)
elif ((abs(desviation)) < 85):
    # Small deviation: straight stretch or gentle curve
    if ((abs(desviation)) < 15):
        self.sensor.setV(6)
    else:
        self.sensor.setV(3.5)
    self.sensor.setW(-0.000045 * desviation)
elif ((abs(desviation)) < 150):
    # Medium deviation: normal curve
    if ((abs(desviation)) < 120):
        self.sensor.setV(1.8)
    else:
        self.sensor.setV(1.5)
    self.sensor.setW(-0.00045 * desviation)
else:
    # Large deviation: sharp curve, slow down and turn harder
    self.sensor.setV(1.5)
    self.sensor.setW(-0.005 * desviation)
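
Putting the previous steps together, the control logic amounts to: filter the red line, find its middle on a scan row, and steer proportionally to the deviation. The following is only a minimal sketch of one control iteration, written as a standalone function for clarity; it assumes a sensor object with the getImageLeft/setV/setW methods used above (the real solution lives inside ‘MyAlgorithm.py’ and also updates the GUI images), and it simplifies the edge search of steps 9-10 with NumPy:

import cv2
import numpy as np

def follow_line_step(sensor):
    # Steps 2-3: get the left image and convert it to HSV
    imageLeft = sensor.getImageLeft()
    imageLeft_HSV = cv2.cvtColor(imageLeft, cv2.COLOR_RGB2HSV)

    # Steps 4-5: keep only the red line pixels
    value_min_HSV = np.array([0, 235, 60])
    value_max_HSV = np.array([180, 255, 255])
    mask = cv2.inRange(imageLeft_HSV, value_min_HSV, value_max_HSV)

    # Steps 8-10 (simplified): white pixels of the line on scan row 365
    columns = imageLeft.shape[1]
    line_pixels = np.where(mask[365, :] == 255)[0]

    if len(line_pixels) > 0:
        # Middle of the line and deviation from the image center
        position_middle = (line_pixels[0] + line_pixels[-1]) / 2
        desviation = position_middle - (columns / 2)
        # Step 11 (simplified): slow down in curves, turn proportionally to the deviation
        sensor.setV(6 if abs(desviation) < 15 else 2)
        sensor.setW(-0.000045 * desviation)
    else:
        # Line not visible: slow down and keep turning to find it again
        sensor.setV(1.5)
        sensor.setW(-0.005)

The practice framework calls this logic repeatedly, so the car keeps correcting its heading with every new image.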


Here is a video with the code running:

Code development: Vanessa Fernández. [2]



More OpenCV functions

If we want to show a gray image, we must follow these steps:

1. Obtain the camera images:

imageLeft = self.sensor.getImageLeft()
imageRight = self.sensor.getImageRight()

2. Use the cvtColor function, where the first argument is the original image and the second one is the COLOR_BGR2GRAY conversion code, which converts a color image to a grayscale image:

imageGRAYLeft = cv2.cvtColor(imageLeft, cv2.COLOR_BGR2GRAY)
imageGRAYRight = cv2.cvtColor(imageRight, cv2.COLOR_BGR2GRAY)

3. Since the gray image has only one channel, we need to stack it into 3 channels to display the images on the screen:

imageGRAYLeftFinal = np.dstack((imageGRAYLeft, imageGRAYLeft, imageGRAYLeft))
imageGRAYRightFinal = np.dstack((imageGRAYRight, imageGRAYRight, imageGRAYRight))

4. Display the images on the screen:

self.setLeftImageFiltered(imageGRAYLeftFinal)
self.setRightImageFiltered(imageGRAYRightFinal)
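
Other OpenCV filters can be displayed in the same way. For example, a small sketch that shows the edges of the left image with the Canny detector, reusing the grayscale image obtained above (the threshold values 100 and 200 are only illustrative and may need tuning):

# Detect edges on the grayscale image; the result is again a single-channel image
imageEdgesLeft = cv2.Canny(imageGRAYLeft, 100, 200)

# Stack it into 3 channels, as before, so it can be shown in the GUI
imageEdgesLeftFinal = np.dstack((imageEdgesLeft, imageEdgesLeft, imageEdgesLeft))
self.setLeftImageFiltered(imageEdgesLeftFinal)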