Vision in robotics
This tutorial shows how to integrate industrial robots with cameras, a setup commonly used in pick-and-place applications. We will add a static simulated camera and a box object to the scene, then subscribe to the messages from the simulated camera and process them using the OpenCV library.
Adding a simulated camera to Gazebo
Creating a new package
Create a new package in the src folder of the fanuc_ros workspace. Name it camera_exercise and add the following dependencies:
- roscpp
- std_msgs
- sensor_msgs
- cv_bridge
- image_transport.
catkin_create_pkg camera_exercise roscpp std_msgs sensor_msgs cv_bridge image_transport
Adding a camera description to the package
Create a folder in the newly created package and name it urdf. This folder will contain additional models:
- a camera model, and
- a box (or any other object for pick-and-place).
Create a camera.xacro file in the urdf folder and paste into it the following camera description.
<robot name="camera">
<link name="world"/>
<joint name="world_joint" type="fixed">
<origin xyz="0.2 0 1" rpy="-1.5707 1.5707 0"/>
<parent link="world"/>
<child link="camera_link"/>
</joint>
<link name="camera_link">
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<box size="0.05 0.05 0.05"/>
</geometry>
</visual>
<inertial>
<mass value="1e-5"/>
<origin xyz="0 0 0" rpy="0 0 0"/>
<inertia ixx="1e-6" ixy="0" ixz="0" iyy="1e-6" iyz="0" izz="1e-6"/>
</inertial>
</link>
<gazebo reference="camera_link">
<static>true</static>
<turnGravityOff>true</turnGravityOff>
<sensor type="camera" name="camera">
<update_rate>30.0</update_rate>
<camera name="head">
<horizontal_fov>1.3962634</horizontal_fov>
<image>
<width>1920</width>
<height>1080</height>
<format>R8G8B8</format>
</image>
<clip>
<near>0.02</near>
<far>300</far>
</clip>
<noise>
<type>gaussian</type>
<mean>0.0</mean>
<stddev>0.0</stddev>
</noise>
</camera>
<plugin name="camera_controller" filename="libgazebo_ros_camera.so">
<alwaysOn>true</alwaysOn>
<updateRate>0.0</updateRate>
<cameraName>camera</cameraName>
<imageTopicName>image_raw</imageTopicName>
<cameraInfoTopicName>camera_info</cameraInfoTopicName>
<frameName>camera_link</frameName>
<hackBaseline>0.07</hackBaseline>
<distortionK1>0.0</distortionK1>
<distortionK2>0.0</distortionK2>
<distortionK3>0.0</distortionK3>
<distortionT1>0.0</distortionT1>
<distortionT2>0.0</distortionT2>
</plugin>
</sensor>
</gazebo>
</robot>
This xacro file attaches a Gazebo camera sensor to a link fixed in the world and uses the libgazebo_ros_camera plugin to publish its images to ROS.
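Once the camera has been spawned (this is done in the launch file created at the end of this tutorial), you can check that the plugin is publishing. With cameraName set to camera and imageTopicName set to image_raw, the images should appear on /camera/image_raw; the commands below are only a suggested check, not part of the original tutorial.
rostopic list | grep camera
rostopic hz /camera/image_raw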
Adding a box description to the package
Add another file to the urdf folder and name it box.urdf. Paste the following description of a green box into the file.
<robot name="box">
<link name="my_box">
<inertial>
<origin xyz="0 0 0" />
<mass value="0.1" />
<inertia ixx="1e-4" ixy="0.0" ixz="0.0" iyy="1e-4" iyz="0.0" izz="1e-4" />
</inertial>
<visual>
<origin xyz="0 0 0"/>
<geometry>
<box size="0.014 0.014 0.1" />
</geometry>
</visual>
<collision>
<origin xyz="0 0 0"/>
<geometry>
<box size="0.014 0.014 0.1" />
</geometry>
</collision>
</link>
<gazebo reference="my_box">
<material>Gazebo/Green</material>
</gazebo>
<gazebo reference="my_box">
<mu1>1.00</mu1>
<mu2>1.00</mu2>
</gazebo>
</robot>
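If you want to sanity-check the URDF before using it, the check_urdf tool (from the liburdfdom-tools package, which may need to be installed separately) can parse the file and print its link tree.
check_urdf box.urdf
It should report my_box as the root link and no parsing errors.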
OpenCV with ROS
OpenCV is an extremely powerful image processing library. ROS includes a package, cv_bridge, that allows us to seamlessly use OpenCV with ROS Image messages.
Add a new file into the src folder and name it image_processing.cpp. Paste the following code into the file.
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
using namespace std;
class ImageConverter
{
ros::NodeHandle nh_;
image_transport::ImageTransport it_;
image_transport::Subscriber image_sub_;
image_transport::Publisher image_pub_;
public:
ImageConverter() : it_(nh_)
{
// Subscribe to the input video feed and publish the output video feed
image_sub_ = it_.subscribe("/camera/image_raw", 1,
&ImageConverter::imageCb, this);
image_pub_ = it_.advertise("/camera/output_video", 1);
}
void imageCb(const sensor_msgs::ImageConstPtr &msg)
{
cv_bridge::CvImagePtr cv_ptr;
try
{
cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
}
catch (cv_bridge::Exception &e)
{
ROS_ERROR("cv_bridge exception: %s", e.what());
return;
}
// Convert the image to HSV color space, filter, convert back to BGR
GaussianBlur(cv_ptr->image, cv_ptr->image, Size(5, 5), 0, 0);
cvtColor(cv_ptr->image, cv_ptr->image, COLOR_BGR2HSV);
inRange(cv_ptr->image, Scalar(40, 0, 0), Scalar(100, 255, 255), cv_ptr->image);
// Find contours and mark them in red
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(cv_ptr->image, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0));
cvtColor(cv_ptr->image, cv_ptr->image, COLOR_GRAY2BGR);
for (int i = 0; i < contours.size(); i++)
{
Scalar color = Scalar(0, 0, 255);
drawContours(cv_ptr->image, contours, i, color, 2, 8, hierarchy, 0, Point());
}
// Output modified video stream
image_pub_.publish(cv_ptr->toImageMsg());
}
};
int main(int argc, char **argv)
{
ros::init(argc, argv, "image_converter");
ImageConverter ic;
ros::spin();
return 0;
}
This code subscribes to the /camera/image_raw topic and publishes a modified image to the /camera/output_video topic. Explore the image processing pipeline by commenting out various lines!
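As one concrete extension, a common next step in pick-and-place is to locate the detected object in the image. The helper below is only a sketch (it is not part of the original node; the function name and the suggestion to call it from imageCb() after inRange() are our own choices): it finds the largest contour in the binary mask and returns its centroid in pixel coordinates.
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>
// Sketch only: returns the pixel centroid of the largest blob in a binary
// mask, or (-1, -1) if the mask contains no contours.
cv::Point2f largestBlobCentroid(const cv::Mat &mask)
{
  std::vector<std::vector<cv::Point>> contours;
  cv::Mat work = mask.clone(); // findContours may modify its input in OpenCV 3
  cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
  double best_area = 0.0;
  int best_idx = -1;
  for (size_t i = 0; i < contours.size(); ++i)
  {
    double area = cv::contourArea(contours[i]);
    if (area > best_area)
    {
      best_area = area;
      best_idx = static_cast<int>(i);
    }
  }
  if (best_idx < 0)
    return cv::Point2f(-1.f, -1.f);
  cv::Moments m = cv::moments(contours[best_idx]);
  if (m.m00 <= 0.0)
    return cv::Point2f(-1.f, -1.f);
  return cv::Point2f(static_cast<float>(m.m10 / m.m00),
                     static_cast<float>(m.m01 / m.m00));
}
Printing the centroid with ROS_INFO while moving the box in Gazebo is a quick way to convince yourself the detection works; turning the pixel coordinates into a robot pose would additionally require the camera intrinsics and the camera pose in the world.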
Compiling OpenCV code
Compiling OpenCV code requires modifications to the CMakeLists.txt file of the package. The main things that need to be modified/added are:
find_package(OpenCV REQUIRED)
add_executable(image_processing src/image_processing.cpp)
target_link_libraries(image_processing ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
The complete CMakeLists.txt of the package is shown below for reference.
cmake_minimum_required(VERSION 3.0.2)
project(camera_exercise)
## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)
## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
cv_bridge
roscpp
sensor_msgs
std_msgs
image_transport
)
find_package(OpenCV REQUIRED)
## System dependencies are found with CMake's conventions
# find_package(Boost REQUIRED COMPONENTS system)
## Uncomment this if the package has a setup.py. This macro ensures
## modules and global scripts declared therein get installed
## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html
# catkin_python_setup()
################################################
## Declare ROS messages, services and actions ##
################################################
## To declare and build messages, services or actions from within this
## package, follow these steps:
## * Let MSG_DEP_SET be the set of packages whose message types you use in
## your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...).
## * In the file package.xml:
## * add a build_depend tag for "message_generation"
## * add a build_depend and a exec_depend tag for each package in MSG_DEP_SET
## * If MSG_DEP_SET isn't empty the following dependency has been pulled in
## but can be declared for certainty nonetheless:
## * add a exec_depend tag for "message_runtime"
## * In this file (CMakeLists.txt):
## * add "message_generation" and every package in MSG_DEP_SET to
## find_package(catkin REQUIRED COMPONENTS ...)
## * add "message_runtime" and every package in MSG_DEP_SET to
## catkin_package(CATKIN_DEPENDS ...)
## * uncomment the add_*_files sections below as needed
## and list every .msg/.srv/.action file to be processed
## * uncomment the generate_messages entry below
## * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...)
## Generate messages in the 'msg' folder
# add_message_files(
# FILES
# Message1.msg
# Message2.msg
# )
## Generate services in the 'srv' folder
# add_service_files(
# FILES
# Service1.srv
# Service2.srv
# )
## Generate actions in the 'action' folder
# add_action_files(
# FILES
# Action1.action
# Action2.action
# )
## Generate added messages and services with any dependencies listed here
# generate_messages(
# DEPENDENCIES
# sensor_msgs# std_msgs
# )
################################################
## Declare ROS dynamic reconfigure parameters ##
################################################
## To declare and build dynamic reconfigure parameters within this
## package, follow these steps:
## * In the file package.xml:
## * add a build_depend and a exec_depend tag for "dynamic_reconfigure"
## * In this file (CMakeLists.txt):
## * add "dynamic_reconfigure" to
## find_package(catkin REQUIRED COMPONENTS ...)
## * uncomment the "generate_dynamic_reconfigure_options" section below
## and list every .cfg file to be processed
## Generate dynamic reconfigure parameters in the 'cfg' folder
# generate_dynamic_reconfigure_options(
# cfg/DynReconf1.cfg
# cfg/DynReconf2.cfg
# )
###################################
## catkin specific configuration ##
###################################
## The catkin_package macro generates cmake config files for your package
## Declare things to be passed to dependent projects
## INCLUDE_DIRS: uncomment this if your package contains header files
## LIBRARIES: libraries you create in this project that dependent projects also need
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## DEPENDS: system dependencies of this project that dependent projects also need
catkin_package(
# INCLUDE_DIRS include
# LIBRARIES camera_exercise
# CATKIN_DEPENDS cv_bridge roscpp sensor_msgs std_msgs
# DEPENDS system_lib
)
###########
## Build ##
###########
## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(
# include
${catkin_INCLUDE_DIRS}
)
## Declare a C++ library
# add_library(${PROJECT_NAME}
# src/${PROJECT_NAME}/camera_exercise.cpp
# )
## Add cmake target dependencies of the library
## as an example, code may need to be generated before libraries
## either from message generation or dynamic reconfigure
# add_dependencies(${PROJECT_NAME} ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
## Declare a C++ executable
## With catkin_make all packages are built within a single CMake context
## The recommended prefix ensures that target names across packages don't collide
add_executable(image_processing src/image_processing.cpp)
## Rename C++ executable without prefix
## The above recommended prefix causes long target names, the following renames the
## target back to the shorter version for ease of user use
## e.g. "rosrun someones_pkg node" instead of "rosrun someones_pkg someones_pkg_node"
# set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME node PREFIX "")
## Add cmake target dependencies of the executable
## same as for the library above
# add_dependencies(${PROJECT_NAME}_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
## Specify libraries to link a library or executable target against
target_link_libraries(image_processing
${catkin_LIBRARIES}
${OpenCV_LIBRARIES}
)
#############
## Install ##
#############
# all install targets should use catkin DESTINATION variables
# See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html
## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination
# catkin_install_python(PROGRAMS
# scripts/my_python_script
# DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )
## Mark executables for installation
## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_executables.html
# install(TARGETS ${PROJECT_NAME}_node
# RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )
## Mark libraries for installation
## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_libraries.html
# install(TARGETS ${PROJECT_NAME}
# ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
# LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
# RUNTIME DESTINATION ${CATKIN_GLOBAL_BIN_DESTINATION}
# )
## Mark cpp header files for installation
# install(DIRECTORY include/${PROJECT_NAME}/
# DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
# FILES_MATCHING PATTERN "*.h"
# PATTERN ".svn" EXCLUDE
# )
## Mark other files for installation (e.g. launch and bag files, etc.)
# install(FILES
# # myfile1
# # myfile2
# DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
# )
#############
## Testing ##
#############
## Add gtest based cpp test target and link libraries
# catkin_add_gtest(${PROJECT_NAME}-test test/test_camera_exercise.cpp)
# if(TARGET ${PROJECT_NAME}-test)
# target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME})
# endif()
## Add folders to be run by python nosetests
# catkin_add_nosetests(test)
Creating a .launch file
Finally, we will create a launch file that will:
- run the demo_gazebo.launch file,
- spawn the camera,
- spawn the object (box), and
- run the image_processing node.
The contents of the file are shown below. Name it camera.launch.
<launch>
<include file="$(find fanuc_lrmate200id_moveit_config)/launch/demo_gazebo.launch"/>
<param name="camera_description" command="$(find xacro)/xacro --inorder '$(find camera_exercise)/urdf/camera.xacro'"/>
<node name="camera_spawn" pkg="gazebo_ros" type="spawn_model" output="screen" args="-urdf -param camera_description -model camera -x 0 -y 0 -z 0"/>
<param name="box_description" command="cat '$(find camera_exercise)/urdf/box.urdf'"/>
<node name="box_spawn" pkg="gazebo_ros" type="spawn_model" output="screen" args="-urdf -param box_description -model box -x 0.2 -y 0.3 -z 0.1"/>
<node name="image_processing" pkg="camera_exercise" type="image_processing" output="screen"/>
</launch>
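Before launching, you can check that the camera description expands cleanly by running xacro on it manually; the --inorder flag mirrors the one used in the launch file and may only produce a deprecation note on newer distributions. The command below is a suggested check, not part of the original tutorial.
rosrun xacro xacro --inorder $(rospack find camera_exercise)/urdf/camera.xacro
If this prints a plain URDF to the terminal, the spawner should be able to load it.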
Launching
Compile the package by running catkin_make in the root of the fanuc_ros workspace. Then source the setup file.
source devel/setup.bash
And finally, launch the project.
roslaunch camera_exercise camera.launch
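If everything went well, Gazebo starts with the robot, the camera and the green box in the scene. To inspect the processed image stream (the topic name comes from the image_transport publisher in image_processing.cpp), you can, for example, open it in rqt_image_view; if passing the topic as an argument does not work on your setup, select it from the drop-down list instead.
rosrun rqt_image_view rqt_image_view /camera/output_video
The contours of the green box should be drawn in red.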
Playing around
Try to pick up the spawned object manually. Then play around with the following:
- masses and inertias,
- friction coefficients,
- using effort interfaces instead of position interfaces for the gripper fingers,
- ...
Playing with a USB camera
USB cameras are supported by the usb_cam package, which can be installed with the following command.
sudo apt-get install ros-melodic-usb-cam
To run the image_processing.cpp code with a real camera, run the following in separate terminals:
roscore
rosrun usb_cam usb_cam_node usb_cam/image_raw:=camera/image_raw
rosrun rqt_image_view rqt_image_view
And then, after sourcing the package again:
rosrun camera_exercise image_processing
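If your camera is not the default device, the usb_cam node exposes a video_device parameter that can be set on the command line; the device path below is only an example for a second camera.
rosrun usb_cam usb_cam_node _video_device:=/dev/video1 usb_cam/image_raw:=camera/image_raw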
Play around with the OpenCV code, for example, to detect a particular real-life object.