Rperez-tfg


Project Card

Project Name: Web technology in JdeRobot nodes, Electron and Question & Answer Forum

Author: Roberto Pérez González [rpg0702@gmail.com]

Academic Year: 2017/2018

Degree: Degree in Audiovisual Systems and Multimedia Engineering

GitHub Repositories: 2017-tfg-roberto-perez[1]

Tags: Electron, Web Technology, JdeRobot, AskBot, Forum

Web technology in JdeRobot nodes

3DViewer-web and 3D Reconstruction practice

Now we're going to connect our 3DViewer-web with the 3DReconstruction practice of Academy. For that, we need to modify the practice, the Ice interface and our 3DViewer, because the Ice server side isn't fully implemented for JavaScript. Until now, the practice was the client (it had the initiative to send points) and the 3DViewer was the server (it received the points and showed them). Now the 3DViewer will be the client (it will request the points it must show) and the practice will be the server (it will receive the requests and respond with the points to show).

The first thing I changed is the visualization interface:

module jderobot{

	struct Color{
	    float r;
	    float g;
	    float b;
	};

	struct RGBSegment{
	    Segment seg;
	    Color c;
	};

	sequence<RGBPoint> bufferPoint;
	sequence<RGBSegment> bufferSegment;

  /**
   * Interface to the Visualization interaction.
   */
	interface Visualization
	{
	    void drawSegment(Segment seg, Color c);
	    bufferSegment getSegment();
	    void drawPoint(Point p, Color c);
	    bufferPoint getPoints();
	    void clearAll();
	};
};

As you can see, I have created a new struct that is formed by a segment and a color. I have also generated two sequences where both the points and the segments to be sent are stored until the viewer requests them. Finally, I created two new functions to return those lists.

The changes to the practice:

import sys
import Ice
import jderobot

#lists of points and segments
bufferpoints = []
bufferline = []

#implementation of the Ice interface
class PointI(jderobot.Visualization):
    def getSegment(self, current=None):
        #return the stored segments
        return bufferline

    def drawPoint(self, point, color, current=None):
        print point

    def getPoints(self, current=None):
        #return the stored points and empty the buffer
        rgbpointlist = bufferpoints[:]
        del bufferpoints[:]
        return rgbpointlist

    def clearAll(self, current=None):
        print "Clear All"

#create the server with Ice
try:
    endpoint = "default -h localhost -p 9957:ws -h localhost -p 11000"
    id = Ice.InitializationData()
    ic = Ice.initialize(None, id)
    adapter = ic.createObjectAdapterWithEndpoints("3DViewerA", endpoint)
    object = PointI()
    adapter.add(object, ic.stringToIdentity("3DViewer"))
    adapter.activate()
    #ic.waitForShutdown()
except KeyboardInterrupt:
    del(ic)
    sys.exit()

#save the segments in the list
def getbufferSegment(seg, color):
    rgbsegment = jderobot.RGBSegment()
    rgbsegment.seg = seg
    rgbsegment.c = color
    bufferline.append(rgbsegment)

#save the points in the list
def getbufferPoint(point, color):
    rgbpoint = jderobot.RGBPoint()
    rgbpoint.x = point.x
    rgbpoint.y = point.y
    rgbpoint.z = point.z
    rgbpoint.r = color.r
    rgbpoint.g = color.g
    rgbpoint.b = color.b
    bufferpoints.append(rgbpoint)


To modify the practice, we have created a new Python module that defines the functions that store the data, creates the Ice server and defines the Ice interface.

Finally, the modifications to the 3D viewer:

Worker

function setPoint(){
  //ask the server for the stored points and forward them to the GUI
  srv.getPoints().then(function(data){
    self.postMessage({func:"drawPoint", points: data});
  });
}

function setLine(){
  //ask the server for the stored segments and forward them to the GUI
  srv.getSegment().then(function(data){
    self.postMessage(data);
  });
}
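The 3DViewer side talks to this worker through postMessage. For reference (the worker file name here is an assumption), the worker would be created in the 3DViewer like this:

//create the web worker that keeps the Ice connection to the practice
var w = new Worker("worker.js");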

3DViewer

function setLine(){
	w.postMessage({func:"setLine"});
	w.onmessage = function(event){
		if (event.data.length > 0){
			segments = event.data;
			for (var i = 0; i < segments.length; i += 1) {
				addLine(segments[i]);
			}
			setPoint();
		}
	};
}

function setPoint(){
	w.postMessage({func:"setPoint"});
	w.onmessage = function(event) {
		if (event.data.func == "drawPoint"){
			points = event.data.points;
			for (var i = 0; i < points.length; i += 1) {
				addPoint(points[i]);
			}
			//schedule the next request; note that we pass the
			//function itself, not the result of calling it
			setTimeout(setPoint, 1000);
		}
	};
}

GUI

function addPoint (point){
	var geometry = new THREE.Geometry();
	geometry.vertices.push( new THREE.Vector3(point.x,point.z,point.y));
	var sprite = new THREE.TextureLoader().load("img/disc.png");
	var material = new THREE.PointsMaterial( { size: 8, sizeAttenuation: false, map: sprite, alphaTest: 0.5, transparent: true } );
	material.color.setRGB( point.r, point.g, point.b);
	var particles = new THREE.Points( geometry, material );
	particles.name ="point";
	scene.add( particles );
}

function addLine(segment){
	var geometry = new THREE.Geometry();
	//y and z are swapped to match the viewer's axes, as in addPoint
	geometry.vertices.push(
		new THREE.Vector3(segment.seg.fromPoint.x, segment.seg.fromPoint.z, segment.seg.fromPoint.y),
		new THREE.Vector3(segment.seg.toPoint.x, segment.seg.toPoint.z, segment.seg.toPoint.y));
	var material = new THREE.LineBasicMaterial();
	material.color.setRGB(segment.c.r, segment.c.g, segment.c.b);
	var line = new THREE.Line(geometry, material);
	scene.add(line);
}

The next video shows how they run:

Changes in CameraServer-web

Part of the getUserMedia code we were using has been deprecated, so we must modify it to keep it working correctly. In addition, we add the option to choose the video source to use (in case we have more than one camera connected).

Previously we used the following code:

function cameraOn() {
  navigator.getMedia = ( navigator.getUserMedia ||
           navigator.webkitGetUserMedia ||
           navigator.mozGetUserMedia ||
           navigator.msGetUserMedia);

  navigator.getMedia(
    {
     video: true,
     audio: false
    },
    function(stream) {
      cameraStream = stream;
      if (navigator.mozGetUserMedia) {
          video.mozSrcObject = stream;
      } else {
          var vendorURL = window.URL || window.webkitURL;
          video.src = vendorURL.createObjectURL(stream);
      }
      video.play();
   },
   function(err) {
      console.log("An error occured! " + err);
   }
  );
}

which must be modified:

function cameraOn(){
  var constraints = {audio: false, video: true};
  navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
    video.srcObject = stream;
    video.onloadedmetadata = function(e) {
      video.play();
    };
  }).catch(function(err) {
    alert(err.name + ": " + err.message);
  });
}

To add the video source selector we will use navigator.mediaDevices.enumerateDevices(). First we create the selector that will be shown in the html:

navigator.mediaDevices.enumerateDevices()
  .then(function (devices) {
    for (var i = 0; i !== devices.length; ++i) {
      var deviceInfo = devices[i];
      var option = document.createElement('option');
      option.value = deviceInfo.deviceId;
      if (deviceInfo.kind === 'videoinput') {
        //use the device label if available, otherwise a generic name
        option.text = deviceInfo.label || 'Camera ' + (videoSelect.length + 1);
        videoSelect.appendChild(option);
      }
    }
  });

Finally, we made some more modifications to the camera function, which ends up like this:

function cameraOn(){
  //read the selected device from the html selector
  videoSource = $("#videoSource").val();
  var constraints = {};
  if (videoSource == null) {
    constraints = {audio: false, video: true};
  } else {
    constraints = {audio: false, video: {deviceId: {exact: videoSource}}};
  }
  navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
    video.srcObject = stream;
    video.onloadedmetadata = function(e) {
      video.play();
    };
  }).catch(function(err) {
    alert(err.name + ": " + err.message);
  });
}

UavViewer-web with JdeRobot Docker

To run the webtools on different operating systems, we will need to use the JdeRobot Docker image [2].

First of all, we must have Docker installed and then download the JdeRobot Docker image (docker pull jderobot/jderobot). Once we have downloaded the image, we can launch the container with the following command:

docker run -tiP --rm -p 11000:11000 jderobot/jderobot

Now we launch gzserver but using the rungzserver command:

rungzserver road_drone_textures.world

With this command we have already launched the gazebo server and we can connect it with our UavViewer-web. The following video shows how it works:
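The web tool then connects to this published port. As a rough, hypothetical sketch (the proxy name "Pose3D" is an assumption; the actual name depends on the UavViewer-web configuration), the Ice connection from JavaScript would target the published port like this:

var Ice = require("ice").Ice;
var ic = Ice.initialize();
//port 11000 is the one we published with "docker run -p 11000:11000"
var proxy = ic.stringToProxy("Pose3D:ws -h localhost -p 11000");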



New Driver for JdeRobot: CameraServer-web

Now, let's create a new driver for the JdeRobot platform. This driver is a CameraServer that will run with Electron and connect with our CameraView through ROS (as we already know, the ICE server side isn't yet implemented for JavaScript).

The first step is to access the camera through getUserMedia. Then we convert the frame to a ROS CompressedImage and, finally, we create a ROS publisher and publish the frame. The code, step by step, is the following:


We get access to the camera through getUserMedia:

function cameraOn() {
  navigator.getMedia = ( navigator.getUserMedia ||
           navigator.webkitGetUserMedia ||
           navigator.mozGetUserMedia ||
           navigator.msGetUserMedia);

  navigator.getMedia(
    {
     video: true,
     audio: false
    },
    function(stream) {
      cameraStream = stream;
      if (navigator.mozGetUserMedia) {
          video.mozSrcObject = stream;
      } else {
          var vendorURL = window.URL || window.webkitURL;
          video.src = vendorURL.createObjectURL(stream);
      }
      video.play();
   },
   function(err) {
      console.log("An error occured! " + err);
   }
  );
}

We connect to the ROS websocket and create the publisher; the topic and the message type are read from the yml file.

var ros = new ROSLIB.Ros();

ros.on('connection', function() { console.log('Connected to websocket server.');});

ros.on('error', function(error) { console.log('Error connecting to websocket server: ', error); window.alert('Error connecting to websocket server'); });

ros.on('close', function() { console.log('Connection to websocket server closed.');});

var imageTopic = new ROSLIB.Topic({
    ros : ros,
    name : config.topic,
    messageType : config.msgs
  });

Finally, we convert the frame to a ROS CompressedImage and publish it:

function takepicture() {
      canvas.width = width;
      canvas.height = height;

      canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
      var data = canvas.toDataURL('image/jpeg');
      var imageMessage = new ROSLIB.Message({
          format : "jpeg",
          data : data.replace("data:image/jpeg;base64,", "")
      });

      imageTopic.publish(imageMessage);
}

The function takepicture() is called through setInterval(), where we choose the frame rate by changing the interval time:

cameraTimer = setInterval(function(){
    takepicture();
}, (1000/config.fps));
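When the camera is switched off, we can stop publishing by simply clearing this timer:

clearInterval(cameraTimer);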

CameraView with ICE and ROS

Now we're going to use the same project for both of our CameraViews (ICE and ROS). For that, we're going to use a yml file with a line where we indicate the type of server we have. The yml file is the following:

server: 2 # 0 -> Deactivate, 1 -> Ice , 2 -> ROS
serv:
  dir: "localhost"
  port: "11000"
size:
  wd: 640
  hg: 480
endpoint: "CameraA" # endpoint for ICE
subscribe: # Topic for ROS
  topic: "/usb_cam/image_raw/compressed"
  msgs: "sensor_msgs/CompressedImage"

We run the ICE CameraView or the ROS CameraView depending on the number indicated in "server". If the number is 1, we load the html file for ICE, and if the number is 2 we load the html file for ROS. We do this in Electron's main.js. The result is the following:

function createWindow () {
  // Create the browser window.
  win = new BrowserWindow({width: 1800, height: 1000})

  // ICE
  if (config.server == 1) {
    urlselect = 'cameraviewIce.html';

  // ROS
  } else if (config.server == 2) {
    urlselect = 'cameraviewRos.html';
  }
  win.loadURL(url.format({
    pathname: path.join(__dirname, urlselect),
    protocol: 'file:',
    slashes: true
  }))
}

The following video shows the final result:

RosCameraView with Electron

We're going to continue with our RosCameraView. In the last step we managed to receive the frames, but in a raw format that we couldn't show in an HTML canvas, because it only admits RGBA images. We must change the topic and the message type to which we subscribe.

Now, our subscription is the following:

roscam = new ROSLIB.Topic({
   ros : ros,
   name : "/usb_cam/image_raw/compressed",
   messageType : "sensor_msgs/CompressedImage"
 });

In this way, we can now show the image using the html element "img", but first we must decode the compressed frame to a data URL inside the subscription callback, and set it as the source of the img element:

  self.roscam.subscribe(function(message){
          var imagedata = "data:image/jpg;base64," + message.data;
          //show the frame in the img element
          img.setAttribute('src', imagedata);
  });

3D Viewer with ICE and JdeRobot Interface

We're going to continue with our 3DViewer with Electron and ICE. The last thing we did was to connect our 3DViewer to a test server that sent us the points, using a Slice interface created by us. Now we're going to use a JdeRobot Slice interface (with a little modification that I will explain later). The first thing we must know about the ICE connection with JavaScript is that the server side isn't implemented for JavaScript; however, the JdeRobot interface is designed so that the GUI is the server (it receives the points) and the sender is the client. This structure isn't possible with JavaScript (as I said previously, the ICE server side isn't implemented for JavaScript). This is the current interface:

module jderobot{

	struct Color{
	    float r;
	    float g;
	    float b;
	};


  /**
   * Interface to the Visualization interaction.
   */
	interface Visualization
	{
        void drawSegment(Segment seg, Color c);
        void drawPoint(Point point, Color c);
        void clearAll();
	};
};

And the point is defined as follows:

module jderobot{

	/**
	* PCL
	*/
	struct RGBPoint{
      float x;
      float y;
      float z;
      float r;
      float g;
      float b;
	  int id;
	};

	struct Point{
	    float x;
	    float y;
	    float z;
	};

	struct Segment{
	    Point fromPoint;
	    Point toPoint;
	};

};

As we can see, these functions don't return any value, they only receive them. We must make the interface return some value (in our case an RGBPoint). The new interface will be something like this:

module jderobot{

	struct Color{
	    float r;
	    float g;
	    float b;
	};


  /**
   * Interface to the Visualization interaction.
   */
	interface Visualization
	{
        void drawSegment(Segment seg, Color c);
        void drawPoint(Point point, Color c);
        RGBPoint getRGBPoint(RGBPoint point);
        void clearAll();
	};
};

We have changed "void" to "RGBPoint", and we only send one argument.
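As a reference, calling this new function from JavaScript would look something like this (a sketch; "srv" stands for a Visualization proxy obtained with checkedCast, as in the main.js shown further down the page, and "point" is the RGBPoint instance we send):

	srv.getRGBPoint(point).then(function(data){
		//"data" is the RGBPoint filled in by the server
		addPoint(data);
	});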

Once we have made this change, we can use our 3DViewer with the JdeRobot interface. The next video shows this example:

Starting with RosCameraView with Electron

We're starting to create our CameraView with ROS and Electron. For that, we will use the ROS usb_cam node, and we must create a subscriber to receive the data from usb_cam, which acts as the publisher in this case. As in the last point, the connection data and the ROS information will be parsed from a yml file. Our JavaScript code would be something like this:

const electron = require("electron");
const {ipcRenderer} = electron;
const yaml = require('js-yaml');
const fs = require('fs');


let config = {};

try {
    config = yaml.safeLoad(fs.readFileSync('config.yml', 'utf8'));
} catch (e) {
    console.log(e);
}
// This function connects to the rosbridge server running on the local computer on port 9090
var ros = new ROSLIB.Ros({
    url : "ws://" + config.Address + ":" + config.Port
 });

 // This function is called upon the rosbridge connection event
 ros.on('connection', function() {
     // Write appropriate message to #feedback div when successfully connected to rosbridge
     console.log("Connect websocket")
 });

// This function is called when there is an error attempting to connect to rosbridge
ros.on('error', function(error) {
    // Write appropriate message to #feedback div upon error when attempting to connect to rosbridge
    console.log("Error to connect websocket")
});

// This function is called when the connection to rosbridge is closed
ros.on('close', function() {
    // Write appropriate message to #feedback div upon closing connection to rosbridge
    console.log("Disconnect websocket");
 });

// These lines create a topic object as defined by roslibjs
var roscam = new ROSLIB.Topic({
    ros : ros,
    name : config.Topic,
    messageType : config.Msgs
});

function start() {
roscam.subscribe(function(message){
  console.log(message);
});
}

function stop(){
  roscam.unsubscribe();
}

As we can see, the subscriber is very similar to the publisher; we only change publish (send a message) to subscribe (receive a message). The yml file is the following:

Address: "localhost"
Port: "9090"
Format: RGB8
Topic: "/usb_cam/image_raw"
Name: "CameraView"
Msgs: "sensor_msgs/Image"

This time we must launch usb_cam with roslaunch to initialize the camera. Then we start the rosbridge websocket and, lastly, we run our Electron project. The following video shows these first CameraView steps:
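For reference, the commands would be something like the following (the exact usb_cam launch file name depends on the local setup, so take it as an assumption):

roslaunch rosbridge_server rosbridge_websocket.launch
roslaunch usb_cam usb_cam-test.launch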

First Publisher with Rosbridge and Electron

At this point, we're going to create our first publisher with ROS and Electron. We will use the ROS TurtleSim node as subscriber, and we will send a linear and an angular velocity to move our turtle. Additionally, we will pass the connection data through a yml file. Below is the code (fully commented):


const electron = require("electron");
const {ipcRenderer} = electron;
const yaml = require('js-yaml');
const fs = require('fs');

let config = {};
try {
    config = yaml.safeLoad(fs.readFileSync('config.yml', 'utf8'));
} catch (e) {
    console.log(e);
}

// This function connects to the rosbridge server running on the local computer on port 9090

var ros = new ROSLIB.Ros({
    url : "ws://" + config.Address + ":" + config.Port
 });

 
 // This function is called upon the rosbridge connection event
 ros.on('connection', function() {
     // Write appropriate message to #feedback div when successfully connected to rosbridge
     console.log("Connect websocket")
 });

// This function is called when there is an error attempting to connect to rosbridge
ros.on('error', function(error) {
    // Write appropriate message to #feedback div upon error when attempting to connect to rosbridge
    console.log("Error to connect websocket")
});

// This function is called when the connection to rosbridge is closed
ros.on('close', function() {
    // Write appropriate message to #feedback div upon closing connection to rosbridge
    console.log("Disconnect websocket");
 });

// These lines create a topic object as defined by roslibjs
var cmdVelTopic = new ROSLIB.Topic({
    ros : ros,
    name : config.Topic,
    messageType : config.Msgs
});

// These lines create a message that conforms to the structure of the Twist defined in our ROS installation
// It initializes all properties to zero. They will be set to appropriate values before we publish this message.
var twist = new ROSLIB.Message({
    linear : {
        x : 0.0,
        y : 0.0,
        z : 0.0
    },
    angular : {
        x : 0.0,
        y : 0.0,
        z : 0.0
    }
});

/* This function:
 - retrieves numeric values from the text boxes
 - assigns these values to the appropriate values in the twist message
 - publishes the message to the cmd_vel topic.
 */
function pubMessage() {
    /**
    Set the appropriate values on the twist message object according to values in text boxes
    It seems that turtlesim only uses the x property of the linear object
    and the z property of the angular object
    **/
    var linearX = 0.0;
    var angularZ = 0.0;

    // get values from text input fields. Note for simplicity we are not validating.
    linearX = 0 + Number(document.getElementById('linearXText').value);
    angularZ = 0 + Number(document.getElementById('angularZText').value);

    // Set the appropriate values on the message object
    twist.linear.x = linearX;
    twist.angular.z = angularZ;

    // Publish the message
    cmdVelTopic.publish(twist);
}

The yml file is the following:

Address: "localhost"
Port: "9090"
Topic: "/turtle1/cmd_vel"
Name: "TurtleSim"
Msgs: 'geometry_msgs/Twist'

Once we have done this, we run all the ROS services (roslaunch for the websocket and rosrun for the TurtleSim node). The following video shows this example:
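For reference, these commands would be:

roslaunch rosbridge_server rosbridge_websocket.launch
rosrun turtlesim turtlesim_node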

First Steps with Rosbridge and Electron

In this first approach to ROS, we're going to follow the tutorial on the official ROS web page [3]. We must install the ROS platform on our computers; for this we can look at the ROS web page [4]. Once we have installed the platform, we're going to create an Electron project for our first ROS example (main.js, package.json, etc.). For this first example, we're going to create a GUI that shows the first n elements of the Fibonacci series, where n is the order of the series. In this section I only show the client side. To develop ROS projects with JavaScript we must use the roslibjs library.

In this example, the JavaScript code is embedded in the html. The code is the following (commented):


<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<script type="text/javascript" src="https://static.robotwebtools.org/EventEmitter2/current/eventemitter2.min.js"></script>
<script type="text/javascript" src="roslib.js"></script>

</head>

<body>
  <h1>My first Ros project</h1>
  <p id="fibonacci"></p>
</body>
<script type="text/javascript">
  var pf;
//initializing the ROS connection
  var ros = new ROSLIB.Ros({
    url : 'ws://localhost:9090'
  });

//Ability ActionClients to send goal and receive responses 
  var fibonacciClient = new ROSLIB.ActionClient({
    ros : ros,
    serverName : '/fibonacci',
    actionName : 'actionlib_tutorials/FibonacciAction'
  });

  //Sending the goal with the order of the fibonacci series and starting it
  var goal = new ROSLIB.Goal({
    actionClient : fibonacciClient,
    goalMessage : {
      order : 10
    }
  });

  pf = document.getElementById('fibonacci');

  //Receive the feedback with the partial sequence
  goal.on('feedback', function(feedback) {
    console.log('Feedback: ' + feedback.sequence);
    pf.innerHTML = "Fibonacci: " + feedback.sequence;
  });

  goal.on('result', function(result) {
    console.log('Final Result: ' + result.sequence);
    pf.innerHTML = "Fibonacci: " + result.sequence;
  });

  ros.on('connection', function() {
    console.log('Connected to websocket server.');
  });

  ros.on('error', function(error) {
    console.log('Error connecting to websocket server: ', error);
  });

  ros.on('close', function() {
    console.log('Connection to websocket server closed.');
  });

  goal.send();
</script>
</html>

To run this example, we must execute the following commands:

rosrun actionlib_tutorials fibonacci_server
roslaunch rosbridge_server rosbridge_websocket.launch

Once we have executed this, we can run Electron. The next video shows this example:

3D Viewer with ICE

We're going to continue with our 3D Viewer. We already know how to create a point and add it to our scene, so now we can connect to a server and receive points from it. The first step is to create an Ice interface where we indicate the structures and functions of our ICE connection. This interface is written in the Slice language and must be compiled with slice2 + language (for example, slice2js to compile for JavaScript). We're going to use the JdeRobot interfaces (visualization.ice). For this example, we're going to change some functions. Visualization.ice would be something like this:

module jderobot
{
	struct RGBPoint{
	    float x;
	    float y;
	    float z;
	    float r;
	    float g;
	    float b;
	};

	struct Segment{
	    RGBPoint fromRGBPoint;
	    RGBPoint toRGBPoint;
	};

	interface Visualization
	{
	    RGBPoint drawPoint(RGBPoint point);
	    void drawSegment(Segment seg, Color c);
	    void clearAll();
	};
};
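For reference, compiling the interface for the JavaScript client and the Python test server would look like this (assuming the file is named visualization.ice):

slice2js visualization.ice
slice2py visualization.ice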

Once we've created this interface and compiled it with slice2js and slice2py (for our test server), we can implement the ICE code in our 3D Viewer. The ICE code goes in our main.js, and we will connect it with our GUI through Electron's IPC events. The code of main.js is the following:


var srv;
var Ice = require("ice").Ice;
var Demo = require("./visualization").jderobot;
var Promise;
var ic = Ice.initialize();
//RGBPoint instance that we send; the server fills in its fields
var rgbpoint = new Demo.RGBPoint();
...
...
...
//Create IPC
ipcMain.on("async",(event,arg) =>{
  //if arg is equal to 1, connect with the server; otherwise ask for a point
  if (arg == 1){
    var proxy = ic.stringToProxy("SimplePrinter:default -p 10000");
    Promise = Demo.VisualizationPrx.checkedCast(proxy).then(
      function(data)
      {
        srv = data;
        console.log("TRUE");
      });
  } else {
    srv.drawPoint(rgbpoint).then(function(data){
      //Send an IPC event with the point to our GUI
      event.sender.send("async-reply", data);
    });
  }
});

The JavaScript code of our GUI is the following:


function webGLStart (){
	init();
	addGrid();
	addAxes();
	animate();
	//Send an ipc event to start the connection with the server
	ipcRenderer.send("async", 1);
}

//function to send the ipc message to request a point
function createPoint(){
	ipcRenderer.send("async", 2);
}

//we receive the ipc message with the point
ipcRenderer.on("async-reply",(event,arg) =>{
	addPoint(arg);
});

//function to clear all the points
function clearAll(){
	var selectedObject = scene.getObjectByName("point");
	while (selectedObject != null) {
		scene.remove(selectedObject);
		selectedObject = scene.getObjectByName("point");
	}
}

Finally, we're going to create a test server in Python to try our 3DViewer. The server will give us a random point each time we request one. The server is the following:

import sys, Ice
import jderobot
import time
import random
import traceback

status = 0

class PointI(jderobot.Visualization):
    #define the function to send the point
    def drawPoint(self, point, current=None):

        point.x = random.randint(-50, 50)
        point.y = random.randint(-50, 50)
        point.z = random.randint(-50, 50)
        point.r = random.uniform(0, 1)
        point.g = random.uniform(0, 1)
        point.b = random.uniform(0, 1)
        return point

try:
    # Init the ICE communication
    communicator = Ice.initialize(sys.argv)
    adapter = communicator.createObjectAdapterWithEndpoints("SimplePrinterAdapter", "default -p 10000")
    object = PointI()
    adapter.add(object, communicator.stringToIdentity("SimplePrinter"))
    adapter.activate()
    print "Listening at port 10000"
    communicator.waitForShutdown()
except:
    traceback.print_exc()
    status = 1
    if communicator:
        communicator.destroy()

sys.exit(status)

Below is a video with the 3D Viewer:

WebTools in different Operating Systems (OS)

Now we're going to use our WebTools on macOS and Windows. To do this, we only need to install NodeJS on either OS and install Electron (npm install electron). Once we have done this, we run Electron from the shell (npm start). The results are the following:

WebTools with Electron, ICE and NodeJS

In this first approach to web tools and how ICE works with Electron, we are going to port the JdeRobot web tools to Electron. For that, we must have the JdeRobot environment installed and know a few things beforehand.

The first thing is that jQuery isn't imported the usual way, so we must install the jQuery package with the following command:

npm install jquery

Once we have installed it, we must add the following snippet to our html file:

<script>
    window.$ = window.jQuery = require('jquery');
</script>

This way we can already use jQuery in our project.
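For example, we can already read a value from the DOM with jQuery, as the CameraServer-web code does elsewhere on this page with its video selector:

var videoSource = $("#videoSource").val();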

The second thing is that we must create the package.json, where we indicate that this project runs with Electron, its dependencies (ICE, jQuery, Electron, etc.) and the backend file (main.js).
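A minimal sketch of such a package.json could be the following (the concrete version numbers are assumptions):

{
  "name": "WebTools",
  "version": "0.1.0",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  },
  "dependencies": {
    "electron": "^1.7.9",
    "ice": "^3.7.0",
    "jquery": "^3.2.1"
  }
}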

Finally, we must create the backend file (main.js), where the window that shows our GUI is created. This step is the same in every Electron application (look at the previous Electron projects).

Once we know these three things, we can already run the webtools with Electron and NodeJS, because the rest of the files run correctly in Electron.

Node CameraView

This is the nodejs and Electron version of CameraViewJS. To run Node CameraView we must run the following command:

cameraserver cameraserver.cfg

This step is the same one we use to run CameraViewJS; however, we don't need to run the web server, because Electron and its backend take over that function (serving the html file). Therefore, once Electron is installed (npm install electron), we can run our Node CameraView. The next video shows the CameraView:

Node KobukiViewer

This is the nodejs and Electron version of the KobukiViewerJS. The first step is to run Gazebo with the following command:

gazebo kobuki-simple.world

As with the CameraView, we don't need to run the server. The next video shows this tool running:

Node UavViewer

This is the nodejs and Electron version of the UavViewer. As with the previous tools, we don't execute the server; Electron takes over that role.

Adding a point dynamically

Once we know how to create objects using Three.js and Electron, we're going to add a point dynamically, entering its parameters through input fields of our GUI. We're going to work on the Scene3D project.

We're going to use the Points object of three.js and we will add four input fields to our GUI (X, Y, Z coordinates and the point's size). The code is the following:

index.html

<div>
	<p class="createPoint">X AXIS: <input type="number" id="vertx"></p>
	<p class="createPoint">Y AXIS: <input type="number" id="verty"></p>
	<p class="createPoint">Z AXIS: <input type="number" id="vertz"></p>
	<p class="createPoint">SIZE: <input type="number" id="size"></p>
	<button onclick="createPoint()">Create</button>
</div>

Object.js

function addPoint (verx, very, verz, sizep){
	var geometry = new THREE.Geometry();
	//creating the geometry with the parameters entered
	geometry.vertices.push( new THREE.Vector3(verx, very, verz));
	//put the texture
	var sprite = new THREE.TextureLoader().load( "img/disc.png" );
	var material = new THREE.PointsMaterial( { size: sizep, sizeAttenuation: false, map: sprite, alphaTest: 0.5, transparent: true } );
	material.color.setHSL( 1.0, 0.3, 0.7 );
	var point = new THREE.Points( geometry, material );
	//adding the point to our scene
	scene.add( point );
}

function createPoint(){
	var vertx = document.getElementById("vertx").value;
	var verty = document.getElementById("verty").value;
	var vertz = document.getElementById("vertz").value;
	var size = document.getElementById("size").value;
	//Default values
	if (vertx == ""){
		vertx = 0;
	}
	if (verty == ""){
		verty = 0;
	}
	if (vertz == ""){
		vertz = 0;
	}
	if (size == ""){
		size = 25;
	}
	//convert the text fields to numbers before creating the point
	addPoint(Number(vertx), Number(verty), Number(vertz), Number(size));
}


Using Ice with Electron

Once we know how to exchange messages with main.js, we will establish a connection with a server using Ice. The server will be developed in Python and will simply print "Hello World" to the console.

Server

import sys, Ice
import traceback
import Demo

status = 0

class PrinterI(Demo.Printer):
    def printString(self, s, current=None):
        print s

try:
    communicator = Ice.initialize(sys.argv)
    adapter = communicator.createObjectAdapterWithEndpoints("SimplePrinterAdapter", "default -p 10000")
    object = PrinterI()
    adapter.add(object, communicator.stringToIdentity("SimplePrinter"))
    adapter.activate()
    communicator.waitForShutdown()
except:
    traceback.print_exc()
    status = 1
    if communicator:
        communicator.destroy()

sys.exit(status)

For the client, we're going to use JavaScript executed in Electron. As we saw in the last section, we need to communicate with main.js, because the Ice connection must be made in the part of our project that handles system events. So, we must use Electron's IPC module.

In this first example, our web application will have a button; every time we press it, a message will be sent to the server using Ice to write "Hello World" in the console.

Client

index.html

<!DOCTYPE html>
<html>
	<head>
		<meta charset=utf-8>
                <script>
                    const electron = require("electron");
                    const {ipcRenderer} = electron
                    //function that sends an IPC message to main.js, which sends the Ice message to the server
                    function iceSend (){
                      ipcRenderer.send("async", 1);
                    }
                </script>

		<title>My first ICE example</title>
		<style>
			body { margin: 0; }
		</style>
	</head>
	<body>
           <!-- Call the function -->
           <button onclick="iceSend()">Try!</button>
        </body>
</html>

main.js

const {ipcMain} = require('electron')
..
//Create the window
..
var Ice = require("ice").Ice;
var Demo = require("./Printer").Demo;
var ic = Ice.initialize();

//Receive the IPC message and create the Ice connection
ipcMain.on("async",(event,arg) =>{
  Ice.Promise.try(
      function()
    {
        //Connect to our server
        var base = ic.stringToProxy("SimplePrinter:default -p 10000");
        return Demo.PrinterPrx.checkedCast(base).then(
            function(printer)
            {
                //Send Hello World message
                return printer.printString("Hello World!");
            });
    }
    ).finally(
    //Destroy the connection
    function()
    {
        if(ic)
        {
            return ic.destroy();
        }
    }
);});

With this code, we can exchange messages with the server using ICE as middleware.

Exchange messages between main.js and our web project

In Electron, main.js is like a server, because it contains the JavaScript code that creates the window and handles system events. It is what we would run in a normal nodejs application with the command node main.js. However, in Electron we can't communicate directly with main.js from index.html or from any web page of our project. To solve this problem, we're going to use Electron's IPC module. The IPC module is an instance of the NodeJS EventEmitter[5] class, and it gives us the possibility to establish this communication between our web pages and main.js. The code is very easy; in our web pages (index.html) we must add the following JavaScript code:

  const electron = require("electron");
  const {ipcRenderer} = electron;
  ipcRenderer.send("async", "Hello World");

And main.js code is the next:

  const {ipcMain} = require('electron');
  ...
  //Create the Window
  ...
  ipcMain.on("async",(event,arg) =>{
  console.log(arg);
  });

With this simple code, we're sending "Hello World" to main.js; when it receives it, the message is printed on the console.
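Communication in the opposite direction works the same way. As a small sketch (using the same "async-reply" channel name that the 3D Viewer sections above use), main.js can answer the renderer through the event object:

  //main.js: reply to the renderer
  ipcMain.on("async",(event,arg) =>{
    event.sender.send("async-reply", "Hello from main.js");
  });

  //index.html: receive the reply
  ipcRenderer.on("async-reply",(event,arg) =>{
    console.log(arg);
  });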

Adding a straight line, textures, black background and grid floor

We're going to modify our Scene 3D to add textures. For that, we're going to use THREE.TextureLoader().load("texture path") to load the texture we want to use. This call is added to every object we want textured. In this example, we're going to load textures onto the sphere, the cube and the bell.

Also, we're going to add a grid floor. For this, we use the THREE.GridHelper object of three.js and indicate its position.

Finally, we want to create a straight line. We're going to use THREE.Vector3(x,y,z) to indicate the vertices of our line and THREE.Line(geometry, material), where the geometry holds the vertices created with THREE.Vector3.
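The section's own code isn't reproduced here, but a minimal sketch of the four changes would be the following (the texture path and the sizes are assumptions):

	//load a texture and apply it to a cube
	var texture = new THREE.TextureLoader().load("img/texture.jpg");
	var cube = new THREE.Mesh(
		new THREE.BoxBufferGeometry(20, 20, 20),
		new THREE.MeshBasicMaterial({map: texture}));
	scene.add(cube);

	//black background instead of the previous white one
	renderer.setClearColor(0x000000);

	//grid floor placed under the objects
	var grid = new THREE.GridHelper(1000, 100);
	grid.position.y = -20;
	scene.add(grid);

	//straight line between two vertices
	var geometry = new THREE.Geometry();
	geometry.vertices.push(new THREE.Vector3(0, 0, 0), new THREE.Vector3(50, 50, 50));
	var line = new THREE.Line(geometry, new THREE.LineBasicMaterial({color: 0xff0000}));
	scene.add(line);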

The next video shows this example:

Scene 3D

Once we have seen how Electron works with the Hello World project, we're going to do something more difficult. For that, we're going to check Aitor's project [6], more concretely the section about WebGL, and we will try to replicate that scene in our project (with some modifications to integrate input fields). The first thing is to add Three.js and OrbitControls.js to our project's folder to simplify working with WebGL. When we have added these files, we will begin to create our scene, which I will describe step by step.

package.json

As I already indicated in the last section, it describes our project:

{
  "name": "Scene3D",
  "version": "0.1.0",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  },
  "dependencies": {
    "electron": "^1.7.9"
  }
}

main.js

In this section, we're going to create everything necessary to execute Electron:

'use strict';

const electron = require('electron');
const app = electron.app;
const BrowserWindow = electron.BrowserWindow;

let mainWindow;

//Stop the execution when we close the window
app.on('window-all-closed', function() {
  if(process.platform != 'darwin') {
    app.quit();
  }
});

//Define the window
app.on('ready', function() {
  mainWindow = new BrowserWindow({width: 1280, height: 800});
  mainWindow.loadURL('file://' + __dirname + '/index.html');

  mainWindow.on('closed', function() {
    mainWindow = null;
  });
});

index.html

index.html contains all the html code.

<!DOCTYPE html>
<html>
	<head>
		<title>Scene 3D with Electron</title>
	<meta charset="utf-8">
	<script src = "js/three.js"></script>
	<script src = "js/OrbitControls.js"></script>
	<script src = "js/object.js"></script>
	<style>

	#slidecontainer {
	    width: 100%;
	}

	.slider {
	    -webkit-appearance: none;
	    width: 50%;
	    height: 15px;
	    border-radius: 5px;
	    background: #d3d3d3;
	    outline: none;
	    opacity: 0.7;
	    -webkit-transition: .2s;
	    transition: opacity .2s;
	}

	.slider:hover {
	    opacity: 1;
	}

	.slider::-webkit-slider-thumb {
	    -webkit-appearance: none;
	    appearance: none;
	    width: 25px;
	    height: 25px;
	    border-radius: 50%;
	    background: #4CAF50;
	    cursor: pointer;
	}

	.slider::-moz-range-thumb {
	    width: 25px;
	    height: 25px;
	    border-radius: 50%;
	    background: #4CAF50;
	    cursor: pointer;
	}
	</style>
	</head>
	<body onload = "webGLStart()">
		<div id="slidecontainer">
		<input type="range" min="0" max="0.1" value="0.05" step="0.01" class="slider" id="myRange">
		</div>
                <div id="canvas" align = "center">
		</div>
	</body>
</html>

object.js

This file contains all the JavaScript code to create our 3D scene. (This example only shows how to create a 3D cube.)

var camera, scene, renderer, controls;
var cube, floor;
var rotationx = 0.0;
var rotationy = 0.0;

//Init our scene and camera
function init() {
	//Create the camera of the scene
	camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 1000 );
	//Define the camera position
	camera.position.z = 300;
	camera.position.y = 50;
	camera.position.x = 100;
	//Create the scene
	scene = new THREE.Scene();
	renderer = new THREE.WebGLRenderer();
	renderer.setSize( window.innerWidth, window.innerHeight );
	renderer.setClearColor(0xffffff);
	//Indicate the DOM element where we want to draw the scene
	document.getElementById("canvas").appendChild( renderer.domElement );
	controls = new THREE.OrbitControls(camera, renderer.domElement);
	window.addEventListener( 'resize', onWindowResize, false );
}

function onWindowResize() {
	camera.aspect = window.innerWidth / window.innerHeight;
	camera.updateProjectionMatrix();
	renderer.setSize( window.innerWidth, window.innerHeight );
}

//Generate the animation for our objects
function animate() {
	requestAnimationFrame( animate );
	cube.rotation.x += 0.05 + rotationx;
	cube.rotation.y += 0.01 + rotationy;
	renderer.render( scene, camera );
}

//Create a plane
function addPlane(){
	var plane = new THREE.PlaneBufferGeometry(1000, 1000, 10, 10);
	var material = new THREE.MeshBasicMaterial({color: 0x33FF00, side: THREE.DoubleSide});
	floor = new THREE.Mesh(plane, material);
	floor.position.y = floor.position.y - 20;
	floor.rotation.x = Math.PI/2;
	//Add the plane to the scene
	scene.add(floor);
}

//Create a cube
function addCube(){
	var geometry = new THREE.BoxBufferGeometry( 20, 20, 20 );
	var material = [new THREE.MeshBasicMaterial({color: 0x00BB00}),
			new THREE.MeshBasicMaterial({color: 0xAA000F}),
			new THREE.MeshBasicMaterial({color: 0xCC0000}),
			new THREE.MeshBasicMaterial({color: 0xFF00CC}),
			new THREE.MeshBasicMaterial({color: 0x77FFCC}),
			new THREE.MeshBasicMaterial({color: 0x77CC00})];
	cube = new THREE.Mesh( geometry, material );
	cube.position.set( 50, 0, 50 );
	//Add the cube to our scene
	scene.add(cube);
}

//Function that we call when the project starts
function webGLStart (){
	//Init the scene
	init();
	//Create the plane and the cube, and add them to our scene
	addPlane();
	addCube();
	//Animate the scene (cube rotation)
	animate();
	//Slider that indicates the speed of the animation
	var slider = document.getElementById("myRange");
	slider.oninput = function() {
		rotationx = this.value/1000;
		rotationy = this.value/1000;
	}
}


Below is a video of this example, adding an object selector to change the object shown in our scene:

Hello World with Electron

In this first example of using web technology without a web browser, we're going to create a Hello World project. For this, we need to create three files (package.json, index.html and main.js). package.json describes our project (name, version, script name and folder, the application it must be run with, etc.); index.html is the html code of our project. The last file is the JavaScript code in which we create the window where our html will be shown (not a web browser).

Finally, when we have finished our project, we need to install and start Electron[7] in it. For this, we must execute the npm install and npm start commands in a shell inside the project folder. After this, the node_modules folder and its contents will have been created, and a new window will open with our html.

First Steps in Web Technology without a Web Browser

We're going to create a desktop application using web technology. To run this application, we don't need a web browser, because we will use the Electron platform. Electron lets us build cross-platform desktop apps with JavaScript, HTML and CSS. To use it, we must have NodeJS [8] and the JavaScript package manager npm [9] installed on our computers.


Question & Answer forum


To create this forum, we're going to use AskBot [10]. It's an open source Question & Answer forum project, written in Python on top of the Django platform. The objective is to integrate this forum into the JdeRobot platform, so the students can post their questions and anybody can answer.


Logo and Simple Design Changes

AskBot allows changes to its design. For this, we can use two methods: a simpler one, using the live settings of a moderator user, or a harder one, using the AskBot development repository. For the latter option, we would have to install AskBot again from the development repository [11]. We're going to use the first method (live settings) for this first prototype.

We're going to modify the logo and the header, and delete the footer. For this, we have to sign in as a moderator user (the root user) and enter the design section of the settings, where we can modify the logo, CSS, HTML or footer.


Email Notification

To receive email notifications, we'll have to configure the email server in our Django project (this is completely independent of AskBot). I'm going to use Google's SMTP server [12] so I can use Gmail. These modifications must be made in the "settings.py" file of our Django project.

Once we have configured the email server and registered with a user and an email address, we can choose which notifications we want to receive.


Using MySQL

Once we have seen how to create our first AskBot, we're going to change the database. For that, we must have MySQL installed and a root user so we can create our MySQL database. When we have created it, we follow the same steps as in the previous point.

Install and First Prototype

To install AskBot, we're going to follow the documentation on the official AskBot site [13]. In this first prototype, we're going to use an SQLite database (the default database of the Django platform).


Initial Requirements

We're going to use Python 2.7, Django 1.11.7 and pip to install Python packages. AskBot will run on Ubuntu 16.04 and the database will be MySQL. We assume that everything is already installed.