ROS interface

This page introduces the ROS process we developed to read depth images from a rosbag and use them as input to the Loihi chip.

Recording data

Before using the framework, we first need to record data inside a rosbag. Rosbags provide a camera-agnostic way of capturing depth data: you can use any depth camera that works with ROS, even though we use an Azure Kinect here.

Set up the depth camera

First, set up the depth camera as demonstrated in the Calibration section. Then we will use the capture_pcl package to stream the depth images from the camera. This package performs several operations on the raw depth images obtained from the Kinect (filtering, depth enhancement) and publishes a refined stream of images on the ROS topic /depth_perception/depth. To use the package, set the parameters in the launch file to the same values you used in the Calibration section:

<launch>
   <!-- use this argument if you want to transform the image -->
   <arg name="hom" default="true"/>
   <param name="homography" value="$(arg hom)" />
   <group if="$(arg hom)">
      <rosparam file="$(find capture_pcl)/config/homography.yaml" />
   </group>
   <param name="init_params" type="bool" value="true" />
   <rosparam param="crop_min_x">-600.919</rosparam>
   <rosparam param="crop_max_x">-140.916</rosparam>
   <rosparam param="crop_min_y">-358.34</rosparam>
   <rosparam param="crop_max_y">402.577</rosparam>
   <rosparam param="crop_min_z">-600.919</rosparam>
   <rosparam param="crop_max_z">26.9382</rosparam>
   <rosparam param="ax">1.34782</rosparam>
   <rosparam param="bx">809.928</rosparam>
   <rosparam param="ay">1.10393</rosparam>
   <rosparam param="by">395.583</rosparam>
   <rosparam param="az">1.59272</rosparam>
   <rosparam param="bz">957.095</rosparam>
   <node name="capture_pcl" pkg="capture_pcl" type="capture_pcl_node" output="screen"/>
</launch>

If you want to use the homography matrix to align the image to the DVS frame, place the file generated by the calibration inside the config folder. Once the parameters are set, you can compile the code:

#assuming you placed the capture_pcl pkg inside "your_ROS_workspace/src/"
cd "your_ROS_workspace"
catkin_make

Then launch the package:

roslaunch capture_pcl capture_depth.launch
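
Before recording, it can be worth checking that depth images are actually being published, either with rostopic hz /depth_perception/depth or with a small subscriber. The sketch below is our own (the node and callback names are arbitrary); it assumes a sourced ROS 1 environment and that the topic carries sensor_msgs/Image messages, which is what message_to_cvimage expects later on:

# depth_check.py - minimal subscriber to confirm the depth stream is alive
import rospy
from sensor_msgs.msg import Image

def on_depth(msg: Image) -> None:
    # header.seq is the sequence number the BagReader process relies on later
    rospy.loginfo("frame %d: %dx%d", msg.header.seq, msg.width, msg.height)

rospy.init_node("depth_check")
rospy.Subscriber("/depth_perception/depth", Image, on_depth)
rospy.spin()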

Rosbags

To record the streamed depth images inside a rosbag, open a new terminal and run:

rosbag record /depth_perception/depth

The recording stops once you hit Ctrl-C in that terminal. You should see a file named “current-date-time”.bag, which you can rename to your liking (alternatively, pass -O name.bag to rosbag record to choose the filename directly). It is often necessary to edit the recording afterwards (cut it, shift its timestamps, etc.); tools such as rosbag filter cover these cases, and we invite you to check the rosbag documentation for details.

Loihi process to use the rosbags

To use the process that retrieves the depth images from the rosbags and transforms them into inputs to Loihi, you need to install the Python rosbags library:

#activate the virtualenv where you installed LAVA
source neuro/bin/activate
pip install rosbags
pip install rosbags-image
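
As a quick sanity check of the installation (and of your recording), you can open the bag with the rosbags API and list its contents. This is only a sketch, with "recording.bag" standing in for the name of your own bag file:

# inspect_bag.py - list the topics and message counts of a recorded bag
from pathlib import Path

from rosbags.highlevel import AnyReader
from rosbags.typesys import Stores, get_typestore

typestore = get_typestore(Stores.ROS1_NOETIC)
with AnyReader([Path("recording.bag")], default_typestore=typestore) as reader:
    print(f"{reader.message_count} messages, {reader.duration * 1e-9:.1f} s")
    for connection in reader.connections:
        print(connection.topic, connection.msgtype, connection.msgcount)

You should see /depth_perception/depth listed with the number of frames you recorded.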

There are two processes that can be used to read images from rosbags. You can find the folder containing the code here. When using the code, place the folder next to the Python script that runs the simulation.

The process.py is in charge of simply reading images from the rosbag and sending them through a port:

import sys
import numpy as np
import time
from threading import Thread
import cv2
from lava.magma.core.decorator import implements, requires, tag
from lava.magma.core.model.py.model import PyLoihiProcessModel
from lava.magma.core.model.py.ports import PyOutPort
from lava.magma.core.model.py.type import LavaPyType
from lava.magma.core.process.ports.ports import OutPort
from lava.magma.core.process.process import AbstractProcess
from lava.magma.core.resources import CPU
from lava.magma.core.sync.protocols.loihi_protocol import LoihiProtocol
from pathlib import Path
from rosbags.highlevel import AnyReader
from rosbags.typesys import Stores, get_typestore
from rosbags.image import message_to_cvimage

class BagReader(AbstractProcess):
   """
    Process that read depth images from ros bag file and send them through outport 

    Parameters
    ----------
    filename: str
        String to filename
    topic: str
        Topic of the bag from where to retrieve the datas.
    sensor_shape: tuple
        Shape of the image / kinect camera.
    """
   def __init__(
            self,
            depth_shape: tuple,
            filename: str = "",
            topic: str = "",
            resize: tuple = None) -> None:
      if filename == "":
         raise ValueError(
            "you must provide the path of a bag"
         )
      if topic == "":
         raise ValueError(
            "you must provide a topic to listen to"
         )
      self.filename = filename
      self.topic = topic
      self.resize = resize
      if resize is not None:
         self.depth_shape = resize
      else:
         self.depth_shape = depth_shape
      self.depth_out_port = OutPort(shape=self.depth_shape)
      super().__init__(
         depth_shape=self.depth_shape,
         filename=self.filename,
         topic=self.topic,
         resize=self.resize
      )

# Python ProcessModel implementing the bag reader on CPU
@implements(proc=BagReader, protocol=LoihiProtocol)
@requires(CPU)
class LoihiDensePyBagReader(PyLoihiProcessModel):
   depth_out_port: PyOutPort = LavaPyType(PyOutPort.VEC_DENSE, int)

   def __init__(self, proc_params: dict) -> None:
      super().__init__(proc_params)

      self._file_name = proc_params["filename"]
      self._path = Path(self._file_name)
      self._typestore = get_typestore(Stores.ROS1_NOETIC) 
      self._topic = proc_params["topic"]
      self._depth_shape = proc_params["depth_shape"]
      self._resize = proc_params["resize"]
      self._ind_f, self._ind_l = self._init_bag()
      self._current_ind = self._ind_f

   def run_spk(self):
      # read the current frame, push it through the output port and
      # advance to the next sequence number
      depth_image = self._get_bag_frame()
      self.depth_out_port.send(depth_image)
      self._current_ind += 1

   def _get_bag_frame(self):
      img = None
      # clamp the index so the last frame is repeated once the bag is exhausted
      if self._current_ind >= self._ind_l:
         self._current_ind = self._ind_l
      with AnyReader([self._path], default_typestore=self._typestore) as reader:
         connections = [x for x in reader.connections if x.topic == self._topic]
         for connection, timestamp, rawdata in reader.messages(connections=connections):
            msg = reader.deserialize(rawdata, connection.msgtype)
            if msg.header.seq == self._current_ind:
               img = message_to_cvimage(msg)
               if self._resize is not None:
                  img = cv2.resize(
                     img,
                     (self._depth_shape[1], self._depth_shape[0]),
                     interpolation=cv2.INTER_NEAREST)
               # stop scanning once the requested frame has been found
               break
      return img

   # return the first and last sequence numbers of the topic
   # (header.seq does not start at 0)
   def _init_bag(self):
      ind_f = 0
      count = 0
      with AnyReader([self._path], default_typestore=self._typestore) as reader:
         # message_count is the total for the bag, so this assumes the bag
         # only contains the recorded depth topic
         count = reader.message_count
         connections = [x for x in reader.connections if x.topic == self._topic]
         for connection, timestamp, rawdata in reader.messages(connections=connections):
            msg = reader.deserialize(rawdata, connection.msgtype)
            # only the first message is needed to read the starting seq number
            ind_f = msg.header.seq
            break
      ind_l = ind_f + count - 1

      return ind_f, ind_l
   
   def _pause(self):
      """Pause was called by the runtime"""
      super()._pause()
      self.t_pause = time.time_ns()

   def _stop(self) -> None:
      """Stop was called by the runtime"""
      super()._stop()

The topic name is the one you recorded data from (/depth_perception/depth). The depth_shape parameter is the size of the depth images; in our case, the capture_pcl package produces images of 1204x642. You can of course change that parameter inside the capture_pcl code. This process keeps an index into the rosbag and is in charge of sending the images one by one through the output port.

The second process rests on the same principle. However, it adds the possibility to synchronize the images with data entering the process. This is essential if you want to synchronize the Kinect and the Prophesee together, since they run on different time scales. Basically, the Prophesee process (next section) sends timestamps to this rosbag process, which in turn decides, based on the received timestamps, which images to send.
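
For the standalone case, a minimal sketch of how the BagReader could be wired up and run in a Lava simulation is shown below. The bag filename, the number of steps, the RingBuffer sink used to collect the frames and the run configuration are our own illustrative choices, not part of the original code, and may need adjusting to your Lava version:

# run_bag_reader.py - standalone sketch: stream bag frames into a sink process
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg
from lava.proc.io.sink import RingBuffer as Sink

num_steps = 50
bag_reader = BagReader(
    depth_shape=(642, 1204),          # (height, width) as produced by capture_pcl
    filename="recording.bag",         # placeholder path to your recording
    topic="/depth_perception/depth",
)
# buffer the received frames so they can be inspected after the run
sink = Sink(shape=bag_reader.depth_out_port.shape, buffer=num_steps)
bag_reader.depth_out_port.connect(sink.a_in)

bag_reader.run(condition=RunSteps(num_steps=num_steps),
               run_cfg=Loihi1SimCfg(select_tag="floating_pt"))
frames = sink.data.get()              # array of shape (642, 1204, num_steps)
bag_reader.stop()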

In the Tutorial section, we will demonstrate how to use the rosbags process standalone or synchronized with the Prophesee camera.