WARNING! This tutorial assumes that you have advanced knowledge of: git, Linux, ROS1, YARP, Python, the KUKA arm, FRI, and robot kinematics.
This tutorial was created by Daniel García Vaglio. If you have any questions about this tutorial or about the calibration process, ask him.
Clone the `keyboard-cart-control` repository in `~/local/src` and follow its README for installation instructions.
TODO: Add an explanation of the process and why we do things the way we do them
Make sure that `bridge` and `vfclik` are running and talking to the robot using FRI. Then go to `~/local/src/keyboard_cart_control` and execute the following command:

```
python keyboard_cart_control.py -n /arcosbot-real
```
Open `realsense-viewer` so you can check that the calibration target is visible to the camera. Then, with the robot in its initial pose, use `keyboard_cart_control.py`:

- Press `i` to store this first pose.
- Press `y` to activate automatic pose storage.
- Press `u` to see how many poses have been stored. We want around 2500 poses.
- Press `o` to save the poses to a CSV file.

When you are done, stop `keyboard_cart_control.py` and move the CSV file into the `camera2png` data directory:
```
mv <poses-filename>.csv <catkin_workspace>/src/oms-cylinder/camera2png/data
```
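Before moving on, you can count the stored poses. This is a minimal sketch; the exact CSV layout written by `keyboard_cart_control.py` is an assumption here:

```python
# Sanity check: count the poses stored in the CSV produced above.
# Assumption: one pose per row (e.g. x, y, z, qx, qy, qz, qw); the exact
# column layout of keyboard_cart_control.py may differ.
import csv

with open('<poses-filename>.csv') as f:
    poses = list(csv.reader(f))

print('stored poses:', len(poses))  # we want around 2500
```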
Close `realsense-viewer` before launching the camera node; the camera can only be opened by one process at a time.
```
cd <catkin_workspace>/src/oms-cylinder/oms_launcher/launch/
roslaunch rs_camera.launch
```
Then prepare the output directory and run the `camera2png` script:

```
cd <catkin_workspace>/src/oms-cylinder/camera2png/data
mkdir output
cd <catkin_workspace>/src/oms-cylinder/camera2png/scripts/
python camera2png.py ../data/example.py ../data/<poses-file-name>.csv ../data/output
```
Sometimes the KUKA robot stops working (we are not sure why). When that happens, you will have to:
1. Stop the `camera2png` script.
2. Check in `<catkin_workspace>/src/oms-cylinder/camera2png/data` which was the last generated photo. Let's say it was `N`.
3. Once the robot is working again, run the `camera2png` script like this:

```
python camera2png.py ../data/example.py ../data/<poses-file-name>.csv ../data/output --start <N+1>
```
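For intuition, here is a minimal sketch of what a pose-driven capture loop like the one in `camera2png.py` plausibly does. This is NOT the real script: the image topic name and the CSV layout are assumptions, and commanding the arm to each pose is omitted.

```python
#!/usr/bin/env python
# Hypothetical sketch of a camera2png-style capture loop (not the real script).
import csv
import sys

import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

def capture(poses_csv, output_dir, start=0):
    bridge = CvBridge()
    with open(poses_csv) as f:
        poses = list(csv.reader(f))
    for i, pose in enumerate(poses):
        if i < start:
            continue  # this is what --start <N+1> skips after a robot failure
        # (moving the robot to `pose` is omitted in this sketch)
        msg = rospy.wait_for_message('/camera/color/image_raw', Image)  # topic name is an assumption
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imwrite('%s/%06d.png' % (output_dir, i), frame)

if __name__ == '__main__':
    rospy.init_node('camera2png_sketch')
    start = int(sys.argv[3]) if len(sys.argv) > 3 else 0
    capture(sys.argv[1], sys.argv[2], start=start)
```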
You have to provide an initial guess for the base-to-camera and wrist-to-target transforms: each one is a translation vector `(x, y, z)` and an orientation quaternion `(x, y, z, w)`. This configuration has to be stored in YAML format in the following file: `<catkin_ws>/src/oms-cylinder/oms_launcher/cal/configs/static_camera_guesses.yml`. Here you can find an example (do NOT use this as your guess):

```yaml
base_to_camera_guess:
  x: 1.04
  y: 1.1
  z: 0.0
  qx: 0.3
  qy: 0.4
  qz: 0.5
  qw: 0.01
wrist_to_target_guess:
  x: 0.109
  y: 0.2
  z: -0.05
  qx: -0.4
  qy: 0.5
  qz: 0.5
  qw: -0.4
```
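If it is easier to reason about your guess in roll/pitch/yaw, a small helper like this (an assumption, not part of the repo) converts it to the `qx`/`qy`/`qz`/`qw` fields the file expects, and always produces a properly normalized quaternion:

```python
# Convert a roll/pitch/yaw guess (radians, ZYX convention) to a quaternion.
import math

def rpy_to_quaternion(roll, pitch, yaw):
    cr, sr = math.cos(roll / 2.0), math.sin(roll / 2.0)
    cp, sp = math.cos(pitch / 2.0), math.sin(pitch / 2.0)
    cy, sy = math.cos(yaw / 2.0), math.sin(yaw / 2.0)
    qw = cr * cp * cy + sr * sp * sy
    qx = sr * cp * cy - cr * sp * sy
    qy = cr * sp * cy + sr * cp * sy
    qz = cr * cp * sy - sr * sp * cy
    return qx, qy, qz, qw

print(rpy_to_quaternion(0.0, 0.0, math.pi / 2))  # camera yawed 90 deg about Z
```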
You also have to provide the camera intrinsics:

```yaml
intrinsics:
  fx: 1387.6160888671875
  fy: 1387.5479736328125
  cx: 943.9945678710938
  cy: 561.1880493164062
```
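As a sanity check on these numbers, the standard pinhole model (which we assume is what the calibration uses) maps a 3D point in the camera frame to pixel coordinates like this:

```python
# Pinhole projection with the intrinsics above: u = fx*X/Z + cx, v = fy*Y/Z + cy.
fx, fy = 1387.6160888671875, 1387.5479736328125
cx, cy = 943.9945678710938, 561.1880493164062

def project(X, Y, Z):
    return fx * X / Z + cx, fy * Y / Z + cy

print(project(0.0, 0.0, 1.0))   # a point on the optical axis lands at (cx, cy)
print(project(0.1, 0.05, 1.0))  # 10 cm right, 5 cm down, at 1 m depth
```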
And the definition of the calibration target:

```yaml
target_definition:
  rows: 10
  cols: 10
  spacing: 0.01861
```
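Assuming `spacing` is the center-to-center distance between dots in meters, the physical extent of this target is easy to verify against the printed sheet:

```python
rows, cols, spacing = 10, 10, 0.01861
# Distance between the outermost dot centers along each side of the grid.
print((rows - 1) * spacing, (cols - 1) * spacing)  # ~0.167 m x ~0.167 m
```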
Move the captured data into the calibration configs directory and separate poses from images:

```
mv <catkin_workspace>/src/oms-cylinder/camera2png/data <catkin_ws>/src/oms-cylinder/oms_launcher/cal/configs
cd <catkin_ws>/src/oms-cylinder/oms_launcher/cal/configs/data
mkdir images
mkdir poses
mv *.yml poses
mv *.png images
```
Create a file that lists the pose/image pairs in the `configs/data` directory:

```yaml
# TODO: Fix this example
- pose: "000000.yml"
  image: "000000.png"
- pose: "000001.yml"
  image: "000001.png"
...
```
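Typing ~2500 entries by hand is error-prone, so something like the following can generate the list from the files that actually exist. This is a sketch: the output filename is hypothetical, and whether the launch file reads this exact index is an assumption.

```python
# Generate the pose/image index from the contents of poses/ and images/.
# Run from <catkin_ws>/src/oms-cylinder/oms_launcher/cal/configs/data.
import os

with open('data_index.yml', 'w') as out:  # hypothetical filename
    for pose_file in sorted(os.listdir('poses')):
        stem = os.path.splitext(pose_file)[0]
        out.write('- pose: "%s"\n  image: "%s.png"\n' % (pose_file, stem))
```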
Finally, run the calibration:

```
roslaunch oms-cylinder/oms_launcher/launch/static_calibration.launch
```
Press `ENTER` for each image. If your data set is big, only one `ENTER` is required (we have no idea what the threshold is, but it seems to be around 2000 poses). If you press `ENTER` and the program doesn't respond, don't worry: some images take longer to analyze than others. When you reach the last image, the image-window interface will get stuck. Do NOT press `ENTER`; wait until the algorithm converges. You should get output like this:

```
Did converge?: 1
Initial cost?: 47853.2 (pixels per dot)
Final cost?: 0.655122 (pixels per dot)

BASE TO CAMERA:
-0.208397  -0.664739    0.717421  -0.0773392
-0.967382   0.0320803  -0.251281   0.220998
 0.144021  -0.746387   -0.649742   1.43606
 0          0           0          1

--- URDF Format Base to Camera ---
xyz="-0.0773392 0.220998 1.43606"
rpy="0.854511(48.9599 deg) -2.99707(-171.719 deg) 1.35862(77.8429 deg)"
qxyzw="-0.593562 0.687426 -0.362827 0.208531"

BASE TO CAMERA:
-0.0184899    0.999697   -0.0162778  -0.0817589
-0.00618554   0.0161659   0.99985     0.139854
 0.99981      0.0185878   0.00588475 -0.00619646
 0            0           0           1

--- URDF Format Base to Camera ---
xyz="-0.0817589 0.139854 -0.00619646"
rpy="-1.87741(-107.567 deg) -1.59029(-91.1172 deg) 0.322833(18.497 deg)"
qxyzw="-0.48976 -0.507142 -0.502048 0.500889"
```
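To double-check the reported result, you can recover the URDF-format translation and quaternion from the 4x4 matrix yourself. We assume the matrix is printed row-major; note the quaternion may come out sign-flipped, which represents the same rotation.

```python
# Cross-check the first BASE TO CAMERA result from the output above.
import numpy as np
from scipy.spatial.transform import Rotation

T = np.array([
    [-0.208397, -0.664739,   0.717421, -0.0773392],
    [-0.967382,  0.0320803, -0.251281,  0.220998],
    [ 0.144021, -0.746387,  -0.649742,  1.43606],
    [ 0.0,       0.0,         0.0,       1.0],
])

print(T[:3, 3])                                   # xyz: -0.0773392 0.220998 1.43606
print(Rotation.from_matrix(T[:3, :3]).as_quat())  # qxyzw (possibly negated)
```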