2D Camera Integration
The robot control supports object recognition and video cameras. Object recognition cameras are used to detect object types and positions relative to the robot while video cameras provide images for observing the robot.
  
Currently the following cameras are supported:
* ifm O2D200 and O2D500 object recognition cameras
* [[Remote_Variable_Access#Protocol|Cameras that can send TCP/IP messages in the same format as the O2D]]
* USB video cameras - these do not provide object info or positions!
  
This article explains how to set up the O2D200 and O2D500 cameras. [https://youtu.be/OJcislaY9Ek There also is a video (in German) that shows the entire process for the ifm O2D200].
  
=Safety=

* [[file:Caution.png|20px]] Caution! Personal safety has to be ensured during operation.
=Mechanical and electrical setup=

[[file:IFM_camera_vectors.png|thumb|right|600px|Preferred camera mounting position]]
Please refer to the camera's documentation on how to integrate it.
  
*Take care to avoid collisions with the robot.
*Consider the acceptance cone and minimum distance of the camera.
*If possible, mount the camera parallel to the coordinate axes of the robot, i.e. overhead, in front or from the side at an angle of 0°, 90° or 180°. This simplifies calibration significantly (see image on the right).
  
=Camera configuration=

The ifm O2D200 and O2D500 cameras use different tools for configuring the camera. The following sections only show a step-by-step summary; please read the camera's documentation for further info.

== ifm O2D200 ==

[[file:IFM_camera_configuration_E2D200.png|thumb|right|600px|IFM Camera TCP/IP settings]]

The image processing is done entirely in the camera. To recognize a workpiece an "application" has to be set up containing the model of the workpiece. The camera is configured using the IFM software efector dualis E2D200.
Communication with the robot control requires the following settings:
* TCP/IP settings:
** Without embedded control: Keep the factory default IP address or use one of your choice in the network of your computer.
** With Raspberry Pi-based embedded control: 192.168.3.49 or a different address in that same network, but not 192.168.3.11. Since this ethernet port is used for connecting the computer you may need a network switch so you can connect both.
** With Phytec-based embedded control:
*** Ethernet port 2 (recommended): 192.168.4.49 or a different address in that same network, but not 192.168.4.11.
*** Ethernet port 1: See Raspberry Pi-based control.

After changing the IP address you may need to update the IP address of your computer accordingly, otherwise you may not be able to connect to the camera.
  
* An [[media:TestConfig_IFM-O2D_CPR.zip|example camera configuration]] can be downloaded here - it is set up to recognize a 2 Euro coin in our lighting conditions and camera setup. (Your mileage may vary. This file is provided to make sure that parameters such as IP and result output are correct.) To get it to actually recognize a coin, read the IFM documentation. The file has to be unzipped before loading it by right-clicking on an unused folder in the "Applications" dialogue and selecting "Download to Sensor", followed by right-clicking on the same folder and selecting "Activate". Note that the IP configuration of the camera is also updated to the default settings (192.168.0.49) when doing that.
 
<br clear=all>

[[file:IFM_camera_configuration_E2D200_1.png|thumb|right|600px|IFM Protocol Version settings]]
*The Protocol Version must be V2 and the output format ASCII.
 
<br clear=all>
 
** "Result Output" to On
** "Model Detail Output" to On
** "Start string" to "start"
** "Stop string" to "stop"
** "Separator" to "#"
** "Image output" may be On or Off. Set this to "On" if you want to be able to see the camera's image in CPRog/iRC, otherwise "Off" is recommended.
** "Image format" to "Windows bitmap"
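With these settings the camera frames each ASCII result between the configured start and stop strings, with fields separated by "#". As a rough illustration, such a message could be split into fields like this (the payload shown is a made-up example, not guaranteed O2D output; see the ifm documentation for the real field layout):

```python
# Sketch: splitting an ASCII result framed by the "start"/"stop" strings
# configured above. The payload fields are hypothetical; the real layout
# depends on the camera application (see the ifm documentation).

def parse_o2d_message(raw: str):
    """Return the '#'-separated fields between 'start' and 'stop'."""
    if not (raw.startswith("start") and raw.endswith("stop")):
        raise ValueError("incomplete or unframed message")
    fields = raw.split("#")
    return fields[1:-1]  # drop the framing strings

# Hypothetical message: model class, X (px), Y (px), orientation (deg)
print(parse_o2d_message("start#1#320.5#240.1#45.0#stop"))
# → ['1', '320.5', '240.1', '45.0']
```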
 
<br clear=all>
  
Make sure that the camera recognizes your object reliably enough before proceeding to the next step: [[2D_Camera_Integration#Setting_up_the_Robot_Control|Setting up the Robot Control]].
== ifm O2D500 ==
The ifm O2D500 provides different operating modes, two of which are supported by our robot control:
* Contour presence control
* Advanced application
  
By default contour presence control does not send object IDs. This may be relevant if your robot needs to distinguish objects. An alternative communication preset can be loaded to include the object IDs.
  
'''Make sure you are using at least firmware version 1.27.9941''' (see the title bar of the ifm configuration tool after connecting to the camera).
=== Protocol ===
Make sure the process interface protocol is set to V3: Open Device Setup -> Interfaces and check the setting of "Process interface version".
  
=== Creating a Contour Presence Control Application ===
The fastest way to set up a simple image recognition application is via the contour presence control assistant. Simply follow the steps as shown in the screenshots and enter the following settings:
* In step 2: Set the trigger to continuous or process interface (request images via the "Trigger Camera" robot program command).
* In step 7: Consider setting "Orientation" to -180° to 180° so that the object is recognized in any orientation.
* In step 8: Set the output interface to Ethernet.
* In step 9: Enable "Model result" and "ROI results"; "Object results" is optional. Set "Start" to either "star" or "start", "Delimiter" to "#" and "End" to "stop".
  
<gallery>
File:IfmO2D500CountourAssistantEN_00.png|Setting up contour presence control
File:IfmO2D500CountourAssistantEN_01.png|Step 1
File:IfmO2D500CountourAssistantEN_02.png|Step 2
File:IfmO2D500CountourAssistantEN_03.png|Step 3
File:IfmO2D500CountourAssistantEN_04.png|Step 4
File:IfmO2D500CountourAssistantEN_05.png|Step 5
File:IfmO2D500CountourAssistantEN_06.png|Step 6
File:IfmO2D500CountourAssistantEN_07.png|Step 7
File:IfmO2D500CountourAssistantEN_08.png|Step 8
File:IfmO2D500CountourAssistantEN_09.png|Step 9
File:IfmO2D500CountourAssistantEN_11.png|Step 11
</gallery>
  
==== Coordinate Types ====
By default the camera sends image coordinates (pixel positions) which then need to be transformed by the robot control, see section [[2D_Camera_Integration#Setting_up_the_Robot_Control|Setting up the Robot Control]] on how to do the calibration. Alternatively the camera can be calibrated to do the coordinate transformation and send positions relative to the robot in mm:
# Select your camera application and click "Edit application" below the "Application details" at the right side.
# If asked choose to change to advanced configuration (you can't use the assistant for this application afterwards).
# Open the "Images & trigger" configuration, then find the "Calibration" button in the "Trigger & general" section.
# Follow the "Robot sensor calibration" wizard. Please refer to the camera's documentation if any issues occur.
  
The camera by default still sends the pixel position. To send the cartesian position you need to set the "Robot Coordinates" interface preset as explained in the next section.
  
==== Model ID and Interface Preset ====
  
By default the contour presence control mode does not send model IDs. If you need to distinguish objects, change to an advanced application as described in the previous section. Then follow these steps:
# Open the Interfaces configuration of your camera application
# In the TCP/IP section check if the preset for igus robots is available. [[File:Ifm_O2D500_Presets_for_iRC.zip|Otherwise you can get it here]].
# Select the preset for image coordinates unless you calibrated for robot coordinates, then save the application.
  
=== Creating an Advanced Application ===
Creating an advanced application may be the faster approach if you know what you're doing or if you want to define multiple objects.
  
# Create a new application and select advanced application. There is no wizard to follow.
# Calibrate for robot coordinates as explained in the section "Coordinate Types" above.
# Load the interface preset as explained in the section "Model ID and Interface Preset" above.
# Define your models.
# Save the application.
  
=Setting up the Robot Control=
[[File:Camera_Configuration.PNG|thumb|right|600px|Configuration in CPRog/iRC]]
  
Now that the camera is ready to deliver object model and position information the robot control needs to be set up to connect to the camera and calculate the object/pick positions in 3D space.
  
* Start CPRog/iRC and connect it to the robot if you are using one with embedded control.
* Open the camera configuration: File -> Configure Interfaces -> Cameras
* Add a new camera of type "IFM O2D".
  
The configuration area should look similar to the screenshot on the right.
'''Please note:''' If you change camera settings, e.g. your camera application, the robot may continue receiving values based on the old configuration until reconnected. To reconnect, simply click "Apply" in the interface configuration of CPRog/iRC.
  
===General Settings===
These settings refer to the connection to the camera and enable it to be used in a robot program.
  
* '''Enabled''': This enables the camera in CPRog/iRC. If disabled the values from the simulation section will be used. Keep this disabled if you are using an embedded control, otherwise CPRog/iRC might prevent the embedded control from receiving data from the camera.
* '''Image enabled''': If an image is received it will be shown in the camera status section in CPRog/iRC.
* '''Name''': This name identifies the camera in the robot program.
* '''Description''': An optional description, this setting has no effect.
* '''IP address''': IP address of the camera as set up earlier in this article.
* '''Port''': Port number of the camera, by default 50010.
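Before enabling the camera it can help to verify that the configured IP address and port are reachable from the PC at all. A small sketch (the IP and port are the defaults mentioned in this article; adjust them to your setup):

```python
# Quick reachability check for the camera's TCP result port.
# 192.168.0.49:50010 are the defaults mentioned in this article;
# replace them with your actual configuration.
import socket

def camera_reachable(ip: str = "192.168.0.49", port: int = 50010,
                     timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

print(camera_reachable())  # True if the camera accepts the connection
```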
  
===Coordinate Transformation Settings===
This section defines whether and how coordinates from the camera are transformed.
  
Set the Source coordinate type to "Robot coordinates (mm)" if you calibrated the O2D500 to send robot coordinates. This way the coordinates are not changed by the robot control. Otherwise select "Image coordinates (px)" and follow the next steps to set up the robot control to transform the coordinates. Hint: for testing purposes it may be useful to temporarily select "Robot Coordinates" to see the raw pixel values.
  
[[file:IFM_camera_vectors.png|thumb|right|600px|Measure the parameters off your camera setup]]
These settings are used to calibrate the camera so that the correct positions in 3D space are calculated from the received data. Refer to the image on the right on how to measure these values.
  
We suggest calibrating each value step by step: click Apply to save the changes to the robot and check the change in the camera tab under the 3D view (see chapter "Camera Status"). It should become 'more correct' with each step.
  
* '''Origin''': The position of the camera in the robot coordinate system (X/Y/Z value).
** Hint: Move your robot's gripper right under the camera and copy the X and Y position of the robot into the origin fields. Measure the Z distance, add it to the robot's position and enter it as the origin Z value.
** Alternatively move your robot to a nearby position (or the 0-position) and measure the X, Y and Z distances from there.
* '''Pick Distance''': The distance between the top of the workpiece and the camera.
** Hint: Place the object below the camera and measure the distance.
* '''Look Vector''': The viewing direction of the camera. In the image the camera looks down in negative Z direction.
** Hint: If your camera points down enter (0, 0, -1). If it points along a different axis change the vector accordingly.
* '''Up Vector''': This vector defines the rotation of the camera around the view axis, i.e. top and bottom in a 2D image.
** Hint: If the camera is rotated by 90° or 180° around the Z axis you will need to enter one of the following values: (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0). Note that the Z value is always 0!
** Hint: Try to mount your camera at an angle that is a multiple of 90° to the robot coordinate system, otherwise you will need to calculate these values using sine and cosine.
* '''Scaling''': The camera outputs the object position in pixels. To calculate the robot target position the pixel values have to be multiplied with the scaling factor. The scaling factor depends on the distance between the camera and the surface it looks at.
** Hint: Move your robot next to the object and note the X or Y position of both the robot (information tab) and the object (camera tab).
** Then move the robot by a certain distance (e.g. 5cm) along the X or Y axis (cartesian jog).
** Move the object so that it has the same relative distance to the robot and note the new X or Y position.
** From these values you can calculate the scaling factor using the rule of three: divide the actual distance (e.g. 5cm) by the difference between the first and second object positions from the camera tab. Enter this value into the Scaling X and Y entries in the configuration area. If the previous value there is not 1, multiply it by your value.
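As a worked example of the rule of three described above (all numbers are made up; measure your own as described):

```python
# Worked example of the scaling calculation. All numbers are assumed;
# measure your own values as described in the hints above.
moved_mm = 50.0                            # robot jogged by 5 cm along X
obj_x_first, obj_x_second = 210.0, 330.0   # object X from the camera tab

scaling_x = moved_mm / (obj_x_second - obj_x_first)
print(round(scaling_x, 4))  # 0.4167, i.e. roughly 0.42 mm per pixel
```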
  
'''Calibrating the coordinate axes:'''
Place an object under the camera and move it along either the X or Y axis. Note the start and end positions as shown in the camera status tab and find the major motion direction (X or Y, in positive or negative direction). Then jog the robot in cartesian base mode along the same axis. Does its direction match that of the object? If not, try changing the '''Up Vector'''. You can also negate one or both scaling values to invert the axes given by the camera.
  
'''Solving object offsets'''
Depending on the configuration of the camera your coordinates may be offset. To check whether this is the case, place an object right under the camera (so that its center point is in the middle of the picture). The recognized X and Y should be close to those set in '''Origin'''. If they are off by too much, calculate the difference between the recognized X and Y and the origin, then add or subtract that value from the origin.
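Taken together, Origin, Look Vector, Up Vector, Scaling and Pick Distance determine how a pixel position is turned into a robot position. The following sketch shows one plausible form of that transformation for intuition only; the sign conventions and the 640x480 image center are assumptions, and CPRog/iRC's internal math may differ:

```python
# Assumed form of the pixel-to-robot transformation, for intuition only.
# Sign conventions and the (320, 240) image center are assumptions.

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def pixel_to_robot(px, py, origin, look, up, scaling, pick_distance,
                   image_center=(320, 240)):
    """Map a camera pixel to a robot-space position (lengths in mm)."""
    right = cross(look, up)                   # third camera axis
    dx = (px - image_center[0]) * scaling[0]  # mm offset along 'right'
    dy = (py - image_center[1]) * scaling[1]  # mm offset against 'up'
    return tuple(o + dx * r - dy * u + pick_distance * l
                 for o, r, u, l in zip(origin, right, up, look))

# Camera at (120, 120, 335) looking straight down, object in the image
# center, 300 mm pick distance:
print(pixel_to_robot(320, 240, (120, 120, 335), (0, 0, -1), (0, 1, 0),
                     (0.42, 0.42), 300))
# → (120.0, 120.0, 35.0)
```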
  
===Simulation Settings===
The simulation settings are used when the camera is not enabled in CPRog/iRC. Simulation is not available in the embedded control. These values simulate the information received from the camera before transformation to 3D space, therefore the simulated object position will differ from these values.
* '''X''', '''Y''' and '''Z''': Simulated object position in camera image space (XY, usually 0-640 and 0-480) or robot space (XYZ in mm)
* '''Orientation''': Simulated object orientation
* '''Model class''': model class or -1 to simulate a failed recognition
Click "Save Project" to save the changes in CPRog/iRC and the embedded control (if connected). Test the settings carefully, preferably in simulation first, with a low override. Watch whether the robot collides with the ground surface or object.
<br clear=all>
  
==Camera Status==
[[File:IRC_CameraStatus.PNG|thumb|500px|The status area shows the camera status and object information]]
  
Once configured the status area in CPRog/iRC will show the status of the camera and the position and model class of the recognized object. If the camera image is enabled it will be shown there as well.
<br clear=all>
==Using the camera in a robot program==
The robot control provides two commands for accessing the camera: "Camera" and "Trigger Camera". The "Camera" command copies the last received object data to a previously defined position variable and the model class to a number variable. If the camera is set up to continuously send data this is all you need. If the camera is set to be triggered by the process interface you can use the "Trigger Camera" command to request new data from the camera. This is useful for collecting data at a certain moment.
  
[[file:Example_IFM_Camera.png|thumb|600px|Example Program]]
When an embedded computer is used, the camera needs to be connected to the embedded computer, after it has been configured as described above. The first step to connecting the camera to the embedded system is to configure the correct IP address (192.168.'''4'''.49):
 
# Close CPRog
 
# Start the IFM Object Recognition software and connect to your sensor, which should still be connected to your Windows PC (in this tutorial that is IP 192.168.0.49).
 
# Go to "Applications" and select the tab called "Network Parameters" in the lower part of the window.
 
#* Configure '''IP address''' to <code>192.168.4.49</code>
 
#* Configure '''Subnet Mask''' to <code>255.255.255.0</code>
 
#* Configure '''Gateway''' to <code>192.168.4.201</code>
 
#* Click "Assign". The camera will now loose connection to your windows PC.
 
#* Connect the green LAN cable between Camera and Embedded Linux Computer. The correct port to use is the lower (ETH1) port of the DIN Rail embedded computer or the left LAN port of the robolink DCi embedded computer.
 
#* ''(The IP configured here will later on be referenced in the plugin configuration file. The second network adapter (ETH1 of the Linux embedded system) is by default configured for 192.168.4.11. So do not use that particular IP here))''
 
  
===Install and update the configuration file===
+
The screenshot shows how the camera command is used in practice.
Now connect to your Linux embedded computer via LAN.
+
# Declare two variables using the store command: A position variable for the object position and a number variable for the model class.
 +
# Use the Camera command to copy object information into the variables. This command does not wait, if no new information is available it will return the previous values again. If no object is recognized the model class will be -1, the position value must not be used in that case.
 +
# Use an If statement to check whether the model class is greater than -1.
 +
# In that case you may proceed to use the object position. In CPRog/iRC and TinyCtrl earlier than V12 only the cartesian object position is available, you can not use joint commands to move to the object. Joint commands in V12 and later may be faster than linear commands.
  
*Use any of the methods outlined [[FTP_and_putty_Access|in this link]] to edit the configuration file located in <code>/home/root/TinyCtrl/Data/Plugins/ConfigSmartCameraIFMO2D.xml</code>. The FTP method is usually easiest for someone with no prior experience in Linux terminal.
+
[[File:Example_IFM_Camera_NoOrientation.PNG|thumb|600px|Overwrite the object orientation if the robot moves slowly]]
*Find this line in the configuration file:
+
'''Attention 1:''' Your robot might move to the object '''slower than usual''', especially if your robot has no orientation axis (e.g. gantries and deltas). To fix this you can overwrite the orientation values of the target position variable. The approach shown in the screenshot should work for all robot types.
**<code><Cam0Geometry MaxWaitTime="5" ScaleX = "0.25" ScaleY="0.25" OriginX="120" OriginY="120" OriginZ="335" LookX="0" LookY="0" LookZ="-1" UpX="0" UpY="1" UpZ="0" ZDistance="300" SimX="350" SimY="-100" SimA="350" SimModelClass="4"/>"</code>
 
**Update the OrgingX, OriginY, OriginZ, LookX, LookY, LookZ, UpX, UpY, UpZ values and the ZDistance values with the values that you have noted down or made a screenshot of in the calibration step [[file:IFM_camera_config.png|thumb]].
 
  
*After editing and saving that file (ConfigSmartCameraIFMO2D.xml) into the directory <code>/home/root/TinyCtrl/Data/Plugins/</code> on the Linux computer.
+
'''Attention 2:''' In case of robot arms: Linear commands may not be able to move to your object position from each position. Try the following if an '''interpolation error''' occurs:
*Power cycle the robot.
+
* Move to a defined position close to the camera zone before moving to the object.
*Start CPRog and upload the unzipped [[media:Example_IFM_Camera.zip|example program]] that you have used earlier in this tutorial.
+
* Overwrite the object orientation as described in Attention 1.
*Check, if the robot moves to the correct position when the coin is placed in the center below the camera.
+
* Use the joint-by-variable command (supported in V12 and newer). This should work as long as the object position is reachable.
* Check, if the robot moves too far or not far enough, when placing the coin outside the center in X and Y direction.
+
* Check manually whether the target positions are reachable: Read the object position from the cameras tab in CPRog/iRC, then use the jog buttons to move to that position.
** You will likely need to change the scaling factors ('''ScaleX''' and '''ScaleY''') in the config file on the Linux computer (ConfigSmartCameraIFMO2D.xml) to make the robot move the right distance, when the work piece is located off-center. The scaling factor is essentially the factor for the conversion between pixels and mm.
 
  
 +
[[media:CameraExample.zip|You can download the example program here.]]
  
[[Category:CPRog]][[Category:CPRog Plugins]]
+
[[Category:Configuration]][[Category:CPRog]][[Category:Robot Programming]][[Category:TinyCtrl]]

Latest revision as of 15:30, 27 June 2024


Safety

  • Caution! Personal safety has to be ensured during operation.
  • This is especially relevant during configuration and set-up of the camera application. All motion has to be carried out at slow speeds.
  • The operator has to be ready to stop the robot at any time.
  • It is recommended that all programs are tested in the simulation prior to moving the robot.

Mechanical and electrical setup

Preferred camera mounting position

Please refer to the camera's documentation on how to integrate it.

  • Be careful to avoid collisions with the robot.
  • Consider the acceptance cone and minimum distance of the camera.
  • The camera should be mounted parallel to the coordinate axes of the robot if possible. - That means either overhead, in front or from the side at an angle of 0°, 90° or 180°. This will simplify calibration significantly (see image on the right).

Camera configuration

The ifm O2D200 and O2D500 cameras use different configuration tools. The following sections only give a step-by-step summary; please read the camera's documentation for further information.

ifm O2D200

IFM Camera TCP/IP settings

The image processing is done entirely in the camera. To recognize a workpiece an "application" has to be set up containing the model of the workpiece. The camera is configured using the IFM software efector dualis E2D200.

Communication with the robot control requires the following settings:

  • TCP/IP settings:
    • Without embedded control: Keep the factory default IP address or use one of your choice in your computer's network.
    • With Raspberry Pi-based embedded control: 192.168.3.49 or a different address in that same network but not 192.168.3.11. Since this ethernet port is used for connecting the computer you may need to use a network switch so you can connect both.
    • With Phytec-based embedded control:
      • Ethernet port 2 (recommended): 192.168.4.49 or a different address in that same network but not 192.168.4.11.
      • Ethernet port 1: See Raspberry Pi-based control.

After changing the IP address you may need to update the IP address of your computer as described in section PC Network Configuration, otherwise you may not be able to connect to the camera.

  • An example camera configuration can be downloaded here - it is set up to recognize a 2 Euro coin in our lighting conditions and camera setup. (Your mileage may vary. This file is provided to make sure that parameters such as IP and result output are correct.) To get it to actually recognize a coin, read the IFM documentation. The file has to be unzipped before loading it by right-clicking on an unused folder in the "Applications" dialogue and selecting "Download to Sensor", followed by right-clicking on the same folder and selecting "Activate". Note that the IP configuration of the camera is also reset to the default settings (192.168.0.49) when doing that.


IFM Protocol Version settings
  • The Protocol Version must be V2 and the output format ASCII.


IFM IO configuration
  • Select your newly created project and click on the edit button on the LEFT.
  • Then click on "Process Interface" and "Change Settings". The window "IO configuration" appears.
  • In the "TCP/IP settings" set
    • "Result Output" to On
    • "Model Detail Output" to On
    • "Start string" to "start"
    • "Stop string" to "stop"
    • "Separator" to "#"
    • "Image output" may be On or Off. Set this to "On" if you want to be able to see the camera's image in CPRog/iRC, otherwise "Off" is recommended.
    • "Image format" to "Windows bitmap"
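With these settings the camera emits ASCII result frames delimited by the configured start/stop strings and the "#" separator. As an illustration only (the field layout in the sample frame below is an assumption; the actual fields depend on your result output configuration, so check the camera documentation), such a frame could be split like this:

```python
def parse_o2d_frame(raw: str) -> list:
    """Split one ASCII result frame of the form 'start#...#stop'
    into its payload fields. The meaning of each field depends on
    the camera's result output configuration."""
    if not (raw.startswith("start") and raw.endswith("stop")):
        raise ValueError("incomplete frame: %r" % raw)
    # Drop the 'start' and 'stop' markers, keep the payload fields.
    return raw.split("#")[1:-1]

# Hypothetical frame: model class 1, object at pixel (123, 456), angle 90
print(parse_o2d_frame("start#1#123#456#90#stop"))  # ['1', '123', '456', '90']
```

In practice the robot control does this parsing for you; the sketch only shows why the start string, stop string and separator must match on both sides.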


Make sure that the camera recognizes your object reliably enough before proceeding to the next step: Setting up the Robot Control.

ifm O2D500

The ifm O2D500 provides different operating modes, two of which are supported by our robot control:

  • Contour presence control
  • Advanced application

By default contour presence control does not send object IDs. This may be relevant if your robot needs to distinguish objects. An alternative communication preset can be loaded to include the object IDs.

Make sure you are using at least firmware version 1.27.9941 (see the title bar of the ifm configuration tool after connecting to the camera).

Protocol

Make sure the process interface protocol is set to V3: Open Device Setup -> Interfaces and check the setting of "Process interface version".

Creating a Contour Presence Control Application

The fastest way to set up a simple image recognition application is via the contour presence control assistant. Simply follow the steps as shown in the screenshots and enter the following settings:

  • in step 2: Set trigger to continuous or process interface (request images via the "Trigger Camera" robot program command).
  • in step 7: Hint: consider setting "Orientation" to -180 - 180 so that the object is recognized in any orientation.
  • in step 8: Set output interface to Ethernet
  • in step 9: Enable "Model result" and "ROI results", "Object results" is optional. Set "Start" to either "star" or "start", "Delimiter" to "#" and "End" to "stop"

Coordinate Types

By default the camera sends image coordinates (pixel positions) which then need to be transformed by the robot control, see section Setting up the Robot Control on how to do the calibration. Alternatively the camera can be calibrated to do the coordinate transformation itself and send positions relative to the robot in mm:

  1. Select your camera application and click "Edit application" below the "Application details" at the right side.
  2. If asked choose to change to advanced configuration (you can't use the assistant for this application afterwards).
  3. Open the "Images & trigger" configuration, then find "Calibration" button in the "Trigger & general" section.
  4. Follow the "Robot sensor calibration" wizard. Please refer to the camera's documentation if any issues occur.

The camera by default still sends the pixel position. To send the cartesian position you need to set the "Robot Coordinates" interface preset as explained in the next section.

Model ID and Interface Preset

By default the contour presence control mode does not send model IDs. If you need to distinguish objects change to an advanced application as described in the previous section. Then follow these steps:

  1. Open the Interfaces configuration of your camera application
  2. In the TCP/IP section check whether the preset for igus robots is available (File:Ifm O2D500 Presets for iRC.zip).
  3. Select the preset for image coordinates unless you calibrated for robot coordinates and save the application.

Creating an Advanced Application

Creating an advanced application may be the faster approach if you know what you're doing or if you want to define multiple objects.

  1. Create a new application and select advanced application. There is no wizard to follow.
  2. Calibrate for robot coordinates as explained in the section "Coordinate Types" above.
  3. Load the interface preset as explained in the section "Model ID and Interface Preset" above.
  4. Define your models
  5. Save the application


Setting up the Robot Control

Configuration in CPRog/iRC

Now that the camera is ready to deliver object model and position information the robot control needs to be set up to connect to the camera and calculate the object/pick positions in 3D space.

  • Start CPRog/iRC and connect it to the robot if you are using one with embedded control.
  • Open the camera configuration: File -> Configure Interfaces -> Cameras
  • Add a new camera of type "IFM O2D".

The configuration area should look similar to the screenshot on the right.

Please note: If you change camera settings, e.g. your camera application, the robot may continue receiving values based on the old configuration until it is reconnected. To do this simply click "Apply" in the interface configuration of CPRog/iRC.

General Settings

These settings refer to the connection to the camera and enable it to be used in a robot program.

  • Enabled: This enables the camera in CPRog/iRC. If disabled the values from the simulation section will be used. Keep this disabled if you are using an embedded control, otherwise CPRog/iRC might prevent the embedded control from receiving data from the camera.
  • Image enabled: If an image is received it will be shown in the camera status section in CPRog/iRC.
  • Name: This name identifies the camera in the robot program.
  • Description: An optional description; this setting has no effect.
  • IP address: IP address of the camera as set up earlier in this article.
  • Port: Port number of the camera, by default 50010.

Coordinate Transformation Settings

This section defines whether and how coordinates from the camera are transformed.

Set the Source coordinate type to "Robot coordinates (mm)" if you calibrated the O2D500 to send robot coordinates. This way the coordinates are not changed by the robot control. Otherwise select "Image coordinates (px)" and follow the next steps to set up the robot control to transform the coordinates. Hint: for testing purposes it may be useful to temporarily select "Robot Coordinates" to see the raw pixel values.

Measure the parameters of your camera setup

These settings are used to calibrate the camera so that the correct positions in 3D space are calculated from the received data. Refer to the image on the right on how to measure these values.

We suggest calibrating each value step by step, click Apply to save the changes to the robot and check the change in the camera tab under the 3D view (see chapter "Camera Status"). It should become 'more correct' with each step.

  • Origin: The position of the camera in the robot coordinate system (x/y/z value).
    • Hint: Move your robot's gripper right under the camera and copy the X and Y position of the robot into the origin fields. Measure the Z distance, add it to the robot's position and enter it as the origin Z value.
    • Alternatively move your robot to a nearby position (or the 0-position) and measure the X-, Y- and Z- distances from there.
  • Pick Distance: The distance between the top of the workpiece and the camera.
    • Hint: Place the object below the camera and measure the distance
  • Look Vector: The viewing direction of the camera. In the image the camera looks down in negative Z direction.
    • Hint: If your camera points down enter (0, 0, -1). If it points along a different axis change the vector accordingly.
  • Up Vector: This vector defines the rotation of the camera around the target axis, i.e. top and bottom in a 2D image.
    • Hint: If the camera is rotated by 90° or 180° around the Z axis you will need to enter one of the following values: (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0); note that the Z value is always 0!
    • Hint: Try to mount your camera at an angle that is a multiple of 90° to the robot coordinate system, otherwise you will need to calculate these values using sine and cosine.
  • Scaling: The camera outputs the object position results in pixels and hands it over to the plugin. To calculate the robot target position the pixel values have to be multiplied with the scaling factor. The scaling factor is dependent on the distance between camera and the surface the camera looks at (see next section).
    • Hint: Move your robot next to the object and note the X or Y position of both the robot (information tab) and the object (camera tab).
    • Then move the robot by a certain distance (e.g. 5cm) along the X or Y axis (cartesian jog).
    • Move the object so that it has the same relative distance to the robot and note the new X or Y position.
    • From these values you can calculate the scaling factor using the rule of three: Divide the actual distance (e.g. 5 cm) by the difference between the first and second object positions from the camera tab. Enter this value into the Scaling X and Y entries in the configuration area. If the previous value there is not 1, multiply it by your value.
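As a small worked example of this rule of three (the numbers are made up for illustration):

```python
# Hypothetical measurements from the procedure above:
robot_distance_mm = 50.0    # the robot was jogged 5 cm along X
object_delta_px = 200.0     # change of the object's X value in the camera tab
previous_scale_x = 1.0      # value currently in the configuration dialogue

new_scale_x = previous_scale_x * (robot_distance_mm / object_delta_px)
print(new_scale_x)  # 0.25 (mm per pixel)
```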

Calibrating the coordinate axes: Place an object under the camera and move it along either the X or Y axis. Note the start and end positions as shown in the camera status tab and find the major motion direction (X or Y, in positive or negative direction). Then jog the robot in cartesian base mode along the same axis. Does its direction match that of the object? If not, try changing the Up Vector. You can also negate one or both scaling values to invert the axes given by the camera.

Solving object offsets: Depending on the configuration of the camera your coordinates may be offset. To check whether this is the case, place an object right under the camera (so that its center point is in the middle of the picture). The recognized X and Y should be close to those set in Origin. If they are off by too much, calculate the difference between the recognized X and Y and the origin, then add or subtract that value from the origin.
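The calibration values above feed a transformation along these lines. This is only an illustrative sketch, not the plugin's actual code: the pixel-origin convention and axis handedness are assumptions, and Look and Up are assumed to be unit vectors as in the examples above.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def pixel_to_robot(px, py, origin, look, up, scale_x, scale_y, pick_distance):
    """Map a camera pixel offset to a robot position using the
    calibration parameters described above (illustration only)."""
    right = cross(up, look)  # image X axis in robot space (assumed convention)
    return tuple(origin[i]
                 + scale_x * px * right[i]   # pixel offsets scaled to mm
                 + scale_y * py * up[i]
                 + pick_distance * look[i]   # down to the top of the workpiece
                 for i in range(3))

# Camera at (120, 120, 335) looking down, object in the image center:
print(pixel_to_robot(0, 0, (120, 120, 335), (0, 0, -1), (0, 1, 0),
                     0.25, 0.25, 300))  # (120.0, 120.0, 35.0)
```

The sketch shows why each parameter matters: Origin anchors the result, the Look vector carries the Pick Distance, the Up vector fixes the image rotation, and the scaling factors convert pixels to millimeters.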

Simulation Settings

The simulation settings are used when the camera is not enabled in CPRog/iRC. Simulation is not available in the embedded control. These values simulate the information received from the camera before transformation to 3D space, therefore the simulated object position will differ from these values.

  • X, Y and Z: Simulated object position in camera image space (XY, usually 0-640 and 0-480) or robot space (XYZ in mm)
  • Orientation: object orientation
  • Model class: model class or -1 to simulate a failed recognition

Click "Save Project" to save the changes in CPRog/iRC and the embedded control (if connected). Test the settings carefully, preferably in simulation first, with a low override. Watch whether the robot collides with the ground surface or object.

Camera Status

The status area shows the camera status and object information

Once configured the status area in CPRog/iRC will show the status of the camera and the position and model class of the recognized object. If the camera image is enabled it will be shown there as well.

Using the camera in a robot program

The robot control provides 2 commands for accessing the camera: "Camera" and "Trigger Camera". The camera command copies the last received object data to a previously defined position variable and the model class to a number variable. If the camera is set up to continuously send data this is all you need. If the camera is set to be triggered by the process interface you can use the "Trigger Camera" command to request new data from the camera. This is useful to collect data at a certain moment.

Example Program

The screenshot shows how the camera command is used in practice.

  1. Declare two variables using the store command: A position variable for the object position and a number variable for the model class.
  2. Use the Camera command to copy object information into the variables. This command does not wait; if no new information is available it will return the previous values again. If no object is recognized the model class will be -1; the position value must not be used in that case.
  3. Use an If statement to check whether the model class is greater than -1.
  4. In that case you may proceed to use the object position. In CPRog/iRC and TinyCtrl earlier than V12 only the cartesian object position is available; you cannot use joint commands to move to the object. Joint commands in V12 and later may be faster than linear commands.
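The logic of these four steps can be sketched as Python-style pseudocode. The real program consists of CPRog/iRC commands, not Python; camera() and move_linear() below are hypothetical stand-ins defined only to make the sketch runnable, and the returned values are made up.

```python
# Stand-ins for the CPRog/iRC Camera and motion commands (hypothetical):
def camera(name):
    # Returns the last received (model_class, position); -1 means no object.
    return 1, (95.0, 120.0, 35.0)

def move_linear(position):
    print("moving to", position)

# Steps 1+2: declare the variables and copy the last object data into them.
model_class, obj_pos = camera("Camera1")

# Step 3: only use the position if an object was recognized.
if model_class > -1:
    move_linear(obj_pos)   # step 4: move to the cartesian object position
else:
    print("no object recognized")
```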
Overwrite the object orientation if the robot moves slowly

Attention 1: Your robot might move to the object slower than usual, especially if your robot has no orientation axis (e.g. gantries and deltas). To fix this you can overwrite the orientation values of the target position variable. The approach shown in the screenshot should work for all robot types.

Attention 2: In case of robot arms: linear commands may not be able to reach your object position from every starting position. Try the following if an interpolation error occurs:

  • Move to a defined position close to the camera zone before moving to the object.
  • Overwrite the object orientation as described in Attention 1.
  • Use the joint-by-variable command (supported in V12 and newer). This should work as long as the object position is reachable.
  • Check manually whether the target positions are reachable: Read the object position from the cameras tab in CPRog/iRC, then use the jog buttons to move to that position.

You can download the example program here.