Intelligent Control and Automation
Vol. 07, No. 03 (2016), Article ID: 68420, 11 pages
DOI: 10.4236/ica.2016.73008

Motion Planning System for Bin Picking Using 3-D Point Cloud

Masatoshi Hikizu, Shu Mikami, Hiroaki Seki

School of Mechanical Engineering, Kanazawa University, Kanazawa, Ishikawa, Japan

Copyright © 2016 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 23 May 2016; accepted 12 July 2016; published 15 July 2016

ABSTRACT

In this paper, we propose a motion planning system for bin picking using a 3-D point cloud. We assume a situation, typical of a home, in which objects are placed haphazardly: unlike an industrial line, a home has no equipment to arrange objects in order, so a motion planning system that accounts for collisions becomes important. Information about the objects is measured with a laser range finder (LRF) and used as a 3-D point cloud, and the objects are recognized by model-based matching. We propose a search method for a grasping point for a two-fingered robotic hand, and a search method for a path that approaches the grasping point without colliding with the other objects.

Keywords:

3-D Point Cloud, Bin Picking, ICP Algorithm, Motion Planning

1. Introduction

Bin picking by a robot is one of the most typical tasks both on industrial lines and in the home. However, while an industrial line has equipment to arrange objects in order, the home does not, and objects there are often placed haphazardly. The robot therefore has to recognize the location of a target object and grasp it selectively, and it must consider possible collisions with the other objects while doing so. Recognition of the 3-D environment is necessary to pick the target object out of the haphazardly placed ones.

An LRF [1] is used as the sensor that measures the 3-D environment, and the object is recognized by model-based matching [2]-[4]. The point cloud is captured by the LRF. There is much research on recognizing an object from a point cloud, such as [5]-[7]. As methods that attach information to each point, Johnson and Kang included extra information such as color in the point cloud [8], and Akca included extra information such as intensity values [9]. As methods that attach information to key points of the point cloud, Tombari et al. proposed signatures of histograms of orientations [10] [11], Rusu et al. proposed point feature histograms [12], and Sun et al. proposed the point fingerprint, which uses geodesic circles around the reference point as the descriptor [13]. As a method that describes the relation among several feature points, Drost et al. proposed making recognition efficient with a hash table [14].

As for research on bin picking, [15] and [16] fit shape primitives to an unknown object. Morales et al. proposed a 2-D segmentation method to identify the object [17]. There is much research on the grasp planning problem, such as [18]-[22]. Sanz et al. assumed a 2-D model and planned the grasping point on the object using an ellipsoid [23]. Fuchs et al. proposed a bin picking method for cylindrical objects [24].

In this paper, we recognize the position and posture of the object using the ICP algorithm [25]-[27]. A grasping point on the recognized object is searched for within the operating range of the two-fingered robotic manipulator, and an approach path to the grasping point is then searched for. Sub-targets are used to avoid collisions: several sub-targets are defined beforehand, and collision avoidance is made efficient by searching for a path via them. With this system, the pick-and-place motion is expected to become safer and more reliable.

2. Outline of the System

2.1. Overview of the System

In this paper, we assume a bin picking operation in the home using a two-fingered robotic hand, with the target objects placed on a flat surface. 3-D information about those objects is acquired using the LRF sensor, which is mounted on the robotic manipulator as shown in Figure 1. ΣA is the coordinate frame of the robotic manipulator, and ΣB is the coordinate frame of the LRF sensor. The position and posture of the objects are recognized from the 3-D information using a known model, and an approach path to the recognized object is searched for.

This system has the following four phases, which are repeated every time an object is taken. A flow diagram of the system is shown in Figure 2, and a minimal control-flow sketch follows the phase list.

PHASE 1: 3-D measurement of the objects.

PHASE 2: Recognition of the position and the posture of the object.

PHASE 3: Search for the grasping point on the recognized object.

PHASE 4: Search for the approach path to the grasping point on the recognized object.
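As a rough control-flow sketch (our illustration, not the authors' code), the four phases can be expressed as a loop; each argument below is a hypothetical callable standing in for one phase:

```python
def bin_picking_loop(measure, recognize, search_grasp, search_path, execute):
    """Repeat the four phases until no object is recognized (cf. Figure 2).

    measure, recognize, search_grasp, search_path, and execute are
    hypothetical callables standing in for PHASE 1-4 and the motion
    execution; they are placeholders, not the authors' implementation.
    """
    while True:
        cloud = measure()                  # PHASE 1: 3-D measurement
        pose = recognize(cloud)            # PHASE 2: position and posture
        if pose is None:                   # stop when nothing is recognized
            break
        grasp = search_grasp(pose, cloud)  # PHASE 3: grasping point
        path = search_path(grasp, cloud)   # PHASE 4: approach path
        execute(path)                      # pick and place one object
```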

2.2. 3-D Measurement of the Objects (PHASE 1)

The LRF is mounted on the wrist of the 6-DOF robotic manipulator as shown in Figure 1, so the target objects can be measured while the viewpoint is changed freely. The measurement is performed from various directions to reduce occlusion [28]. This scanning operation generates a description of the 3-D environment consisting of 3-D point clouds, as in Figure 3.
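Because each scan is taken in the sensor frame ΣB, the scans must be expressed in the manipulator frame ΣA before they can be merged. A minimal sketch, assuming the 4x4 homogeneous transform of the sensor pose is available from the arm's forward kinematics:

```python
import numpy as np

def merge_scans(scans):
    """Merge LRF scans into one cloud in the manipulator frame (Sigma_A).

    scans: list of (points, T_AB) pairs, where points is an (N, 3) array
    in the sensor frame (Sigma_B) and T_AB is the 4x4 homogeneous
    transform of the sensor pose, given by the arm's forward kinematics.
    """
    merged = []
    for points, T_AB in scans:
        homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
        merged.append((homog @ T_AB.T)[:, :3])  # transform into Sigma_A
    return np.vstack(merged)
```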

2.3. Recognition of the Position and the Posture of the Object (PHASE 2)

In this phase, the position and posture of the object are estimated using the 3-D point clouds measured in PHASE 1. The system assumes that the size and shape of the object are known, as in Figure 4. Nearest points are searched for, based on the ICP algorithm, between the model point group and the measured point group of

Figure 1. A scanning process using the LRF sensor.

Figure 2. The flow diagram of the system.


Figure 3. An example of a 3-D point cloud. (a) The environment; (b) point clouds; (c) without the floor.


Figure 4. An example of the object model. (a) A square pillar; (b) a model of the square pillar.

the object, which is prepared beforehand. Equation (1) is the evaluation function:

$$e(m) = \frac{1}{N}\sum_{i=1}^{N} \Delta e_i, \qquad \Delta e_i = \left\| \boldsymbol{p}_{k_i} - \boldsymbol{q}_i \right\| \tag{1}$$

Here, p is the position vector of a point measured by the LRF, q is the position vector of a point of the prepared model, N is the number of model points used, Δe is the distance between corresponding points, m is the iteration count, and k_i is the index of the measured point that corresponds to model point i. In this paper, to cope with occlusion, the number of used model points N is changed every time the posture of the model changes, as in Figure 5, because the number of model points visible from the LRF varies with the model's position and posture. The standard ICP algorithm has difficulty with data sets that share few overlapping points [29]. Whether a point is used is judged by Equation (2):

$$\boldsymbol{v}_i \cdot \boldsymbol{n}_i \tag{2}$$

$\boldsymbol{v}_i \cdot \boldsymbol{n}_i \geq 0$: the point is unused (treated as occluded).

$\boldsymbol{v}_i \cdot \boldsymbol{n}_i < 0$: the point is used.

Here, $\boldsymbol{v}_i$ is the vector from the viewpoint to the model point, and $\boldsymbol{n}_i$ is the face normal vector of that model point.
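The following sketch illustrates Equations (1) and (2) together, assuming the model points carry precomputed face normals and that correspondences are found by a nearest-neighbor search (our illustrative choice; the paper does not specify the search structure):

```python
import numpy as np
from scipy.spatial import cKDTree

def visible_model_points(model_pts, model_normals, viewpoint):
    """Equation (2): a model point is used only if its face normal opposes
    the viewing direction (v . n < 0); otherwise it is treated as occluded.
    model_pts and model_normals are (N, 3) arrays, viewpoint is (3,)."""
    v = model_pts - viewpoint                       # viewpoint -> point vectors
    used = np.einsum('ij,ij->i', v, model_normals) < 0.0
    return model_pts[used]

def icp_error(measured_pts, used_model_pts):
    """Equation (1): mean distance between each used model point q_i and
    its nearest measured point p_{k_i} (nearest-neighbor correspondence)."""
    tree = cKDTree(measured_pts)
    dists, _ = tree.query(used_model_pts)           # the delta e_i values
    return dists.mean()                             # e(m)

# usage sketch: q_used = visible_model_points(q, n, view)
#               e_m = icp_error(p, q_used)          # compare with threshold
```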

2.4. Search for the Grasping Point on the Recognized Object (PHASE 3)

In this phase, a grasping point on the recognized object is searched for the two-fingered robotic hand. The model used for recognition carries an axis as information for grasping, as in Figure 6. The search start


Figure 5. The principle of occlusion estimation.

Figure 6. The axis for grasping (cylinder model).

position of the robotic hand is decided using this axis for grasping, as in Figure 7, and the search for the grasping point is simplified by using the axis. X, Y, and Z are the direction vectors of the coordinate axes of the axis for grasping; XE, YE, and ZE are the direction vectors of the coordinate axes of the robotic hand. First step: the direction of XE is made parallel to Z. Second step: the closest point on the circumference is searched for. Third step: ZE is aligned with C, the direction vector from the closest point toward the axis for grasping. The grasping point is then decided using the five search-direction patterns of Figure 8: the first pattern is the horizontal direction along the axis for grasping (Figure 8(a)); the second is the pitch direction with respect to the axis (Figure 8(b)); the third is the roll direction with respect to the axis (Figure 8(c)); the fourth is the yaw direction at the end of the axis (Figure 8(d)); and the fifth is the pitch direction at the end of the axis (Figure 8(e)). Finally, the candidate closest to the robot within its operating range is chosen as the grasping point.
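As an illustration of the three-step posture decision (assuming Z runs along the axis for grasping and that the circumference has a known radius, e.g. the cylinder radius; this is our sketch, not the authors' implementation):

```python
import numpy as np

def hand_start_pose(axis_origin, axis_dir, radius, robot_pos):
    """Illustrative sketch of the three-step posture decision (Figure 7).
    axis_origin/axis_dir define the axis for grasping; radius is the
    assumed radius of the circumference around it (hypothetical input)."""
    z = axis_dir / np.linalg.norm(axis_dir)
    # Step 1: X_E is made parallel to the grasping axis Z.
    x_e = z
    # Step 2: the point on the circumference closest to the robot.
    to_robot = robot_pos - axis_origin
    radial = to_robot - np.dot(to_robot, z) * z   # component normal to the axis
    radial /= np.linalg.norm(radial)
    closest = axis_origin + radius * radial
    # Step 3: Z_E is aligned with C, the direction from the closest
    # point toward the grasping axis.
    z_e = -radial
    y_e = np.cross(z_e, x_e)                      # completes a right-handed frame
    return closest, np.column_stack([x_e, y_e, z_e])
```

The five search patterns of Figure 8 would then perturb this start pose to generate grasp candidates.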

It is necessary to consider collisions with objects other than the target; they cannot be ignored when many objects are placed haphazardly. An interference model of the robotic manipulator is used for the collision check, as in Figure 9. An example of the interference judgment is shown in Figure 10: in Figure 10(a), a finger of the robotic hand interferes with a lower object; Figure 10(b) shows the result after the interference has been avoided.
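As a sketch of this judgment, assume the interference model can be approximated by a set of bounding spheres attached to the manipulator links (our simplification; the paper does not state the exact geometry of Figure 9):

```python
from scipy.spatial import cKDTree

def interferes(sphere_centers, sphere_radii, cloud):
    """Check a candidate posture against the measured point cloud.
    sphere_centers/sphere_radii describe the bounding spheres of the
    interference model in the manipulator frame; the posture interferes
    if any measured point falls inside any sphere."""
    tree = cKDTree(cloud)
    for c, r in zip(sphere_centers, sphere_radii):
        if tree.query_ball_point(c, r):   # any point within radius r?
            return True
    return False
```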


Figure 7. Decision procedure for the posture of the robotic hand. (a) XE is made parallel to Z; (b) searching for a point on the closest circumference; (c) ZE is aligned with C.


Figure 8. The standard search patterns for the grasping point. (a) Horizontal; (b) pitch; (c) roll; (d) yaw at the end; (e) pitch at the end.


Figure 9. The interference model of the robotic manipulator. (a) Interference model; (b) interference with the point cloud.


Figure 10. An example of judgment of interference. (a) With interference; (b) without interference.

2.5. Search for the Approach Path to the Grasping Point on the Recognized Object (PHASE 4)

In this phase, the approach path from the waiting position of the robotic manipulator to the grasping point is searched for, basically by inverse kinematics. The movement cost of each candidate approach path from the waiting posture is estimated with Equation (3):

$$S = \sum_{i=1}^{6} \left| \Delta\theta_i \right| \tag{3}$$

S is the total change of all joint angles from the waiting position and is used as the cost of each approach path; Δθ_i is the change in the angle of joint i from the waiting position. The candidate with the minimum S within the operating range is selected as the approach path.
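A small sketch of this selection, assuming the candidate inverse-kinematics solutions are given as joint-angle vectors already restricted to the operating range:

```python
import numpy as np

def select_approach(ik_solutions, waiting_angles):
    """Equation (3): choose the candidate joint vector that minimizes the
    total joint change S from the waiting posture. ik_solutions is an
    (M, 6) array of inverse-kinematics solutions; waiting_angles is (6,)."""
    S = np.abs(np.asarray(ik_solutions) - np.asarray(waiting_angles)).sum(axis=1)
    return ik_solutions[int(np.argmin(S))]
```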

The search for the approach path has to consider collisions, just as the search for the grasping point does. In this paper, sub-targets are used to avoid this problem: several sub-targets are prepared beforehand by the user, and collisions are avoided by searching for an approach path via a sub-target, as in Figure 11.
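A greedy one-waypoint sketch of this idea is shown below; segment_is_free is any user-supplied collision predicate (the paper does not specify the exact search, so this is only an illustration):

```python
def plan_via_sub_targets(start, goal, sub_targets, segment_is_free):
    """Search an approach path, inserting a user-defined sub-target when
    the direct path collides (Figure 11)."""
    if segment_is_free(start, goal):
        return [start, goal]              # direct path is collision-free
    for s in sub_targets:                 # try each prepared sub-target
        if segment_is_free(start, s) and segment_is_free(s, goal):
            return [start, s, goal]
    return None                           # no collision-free path found
```

In practice, segment_is_free could sample postures along the segment and apply an interference check like the one sketched in Section 2.4.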

3. Experiment of Bin Picking on Stacked Objects

The planned motion was verified in a simple experimental environment to confirm the proposed method. Figure 12 shows a bin picking experiment in an environment with three objects, stacked as in Figure 12(a). When objects are in contact with each other, segmenting them becomes difficult. The target objects are cylinders, and the axis for grasping is set along the center of the cylinder, as in Figure 6; therefore all five standard search patterns (Figure 8) are used in the search for the grasping point. In Figure 12(c), measurements taken from four directions (front, top, left, and right) were integrated; the resulting point cloud contains about 9000 points.

3.1. The Recognition Result of the Objects

The model of the cylinder used for recognition is shown in Figure 13. The threshold on e(m) is set to 10 mm, equal to the nominal precision of the LRF. Thirty-six convergent values were obtained, of which 11 fell below the threshold. Six


Figure 11. An example of a path via a sub-target. (a) Without a sub-target; (b) with a sub-target.


Figure 12. The experimental environment and measurement result. (a) The experimental environment; (b) target objects; (c) measurement result.

Figure 13. The model of the cylinder.

of those 11 results converged on the upper object of the stack, as in Figure 14(a). This is because the upper object contributes the most data points, so its shape is measured most accurately. The remaining five results converged on the lower objects of the stack, as in Figure 14(b). All results above the threshold are false detections, as in Figure 14(c); they are local minima. The recognition result of Figure 14(a) is shown in Table 1. The upper object is chosen as the first target, because we want to take the objects from the top of the stack in turn by pick-and-place motion.

3.2. The Motion Planning of the Robot and the Execution Result

A motion plan is made for the first target. The search result for the grasping point is shown in Figure 15(b), the search result for the approach path in Figure 15(c), and the execution by the actual robotic manipulator in Figure 15(d). We succeeded in picking up the upper object from the stack without colliding with the lower objects.

After the first target is taken, a motion plan for the remaining objects is made. Re-measurement is necessary to obtain the partial information that was hidden by the first target; the new measurement is integrated and the objects are re-recognized. The object nearest the robot is chosen from the remaining ones as the next target. The experimental results for the second and third targets are shown in Figure 16 and Figure 17, and their re-recognition results in Table 2. We succeeded in taking each object from the stack in turn. From this result, we consider the method applicable to environments containing many objects, and effective for pick-and-place motion.


Figure 14. An example of the recognition result. (a) Matched (e = 7.2 mm); (b) matched (e = 9.6 mm); (c) unmatched (e = 12.3 mm).


Figure 15. The result for the first target. (a) The recognition result; (b) the search result for the grasping point; (c) the search result for the approach path; (d) the result by the actual robotic manipulator.


Figure 16. The result for the second target. (a) The recognition result (e = 8.3 mm); (b) the search result for the grasping point; (c) the search result for the approach path; (d) the result by the actual robotic manipulator.


Figure 17. The result for the third target. (a) The recognition result (e = 7.8 mm); (b) the search result for the grasping point; (c) the search result for the approach path; (d) the result by the actual robotic manipulator.

Table 1. The recognition result of the position and the posture [Figure 14(a)].

Table 2. The recognition result of the position and the posture [Figure 16 and Figure 17].

4. Conclusion

This paper proposed a motion planning method for the pick-and-place motion of a robotic manipulator in an environment where many objects are placed haphazardly. The 3-D information measured by the LRF is used for motion planning as a 3-D point cloud, and the objects are recognized using the ICP algorithm. An axis attached to the model is used to search for the grasping point, which simplifies the search. Collisions cannot be ignored when many objects are placed haphazardly, but they were avoided by using the interference model and the sub-targets. Motion planning was performed with these methods, and we succeeded in pick-and-place motion with the actual robotic manipulator in an environment with stacked objects. Future work includes distinguishing objects of various shapes, improving the recognition efficiency, and improving the precision of the system.

Cite this paper

Masatoshi Hikizu, Shu Mikami and Hiroaki Seki (2016) Motion Planning System for Bin Picking Using 3-D Point Cloud. Intelligent Control and Automation, 7, 73-83. doi: 10.4236/ica.2016.73008

References

1. Kawata, H., Ohya, A., Yuta, S., Santosh, W. and Mori, T. (2005) Development of Ultra-Small Lightweight Optical Range Sensor System. Proceedings of the IEEE International Conference on Intelligent Robots and Systems, 2-6 August 2005, 1078-1083.
http://dx.doi.org/10.1109/iros.2005.1545476

2. Johnson, A.E. and Hebert, M. (1999) Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21, 433-449.
http://dx.doi.org/10.1109/34.765655

3. Mian, A.S., Bennamoun, M. and Owens, R.A. (2006) A Novel Representation and Feature Matching Algorithm for Automatic Pairwise Registration of Range Images. International Journal of Computer Vision, 66, 19-40.
http://dx.doi.org/10.1007/s11263-005-3221-0

4. Hetzel, G., Leibe, B., Levi, P. and Schiele, B. (2001) 3D Object Recognition from Range Images Using Local Feature Histograms. IEEE Computer Vision and Pattern Recognition, 2, 394-399.

5. Gumhold, S., Wang, X. and MacLeod, R. (2001) Feature Extraction from Point Clouds. Proceedings of the 10th International Meshing Roundtable, Sandia National Laboratories, 293-305.

6. Kodani, K., Manabe, T. and Taniguchi, T. (2003) Surface Generation from Point Cloud on Surface of 3D Domain. Proceedings of the Computational Engineering Conference, 8, 837-840.

7. Xu, F. and Hagiwara, I. (2007) Developing of Registration System for Range Scan Data. Proceedings of the 26th Japan Simulation Conference, 15-118.

8. Johnson, A.E. and Kang, S.B. (1999) Registration and Integration of Textured 3-D Data. Image and Vision Computing, 17, 135-147.
http://dx.doi.org/10.1016/S0262-8856(98)00117-6

9. Akca, D. (2007) Matching of 3D Surfaces and Their Intensities. ISPRS Journal of Photogrammetry and Remote Sensing, 62, 112-121.
http://dx.doi.org/10.1016/j.isprsjprs.2006.06.001

10. Tombari, F., Salti, S. and Di Stefano, L. (2010) Unique Signatures of Histograms for Local Surface Description. Computer Vision - ECCV 2010, 356-369.
http://dx.doi.org/10.1007/978-3-642-15558-1_26

11. Tombari, F. and Di Stefano, L. (2012) Hough Voting for 3D Object Recognition under Occlusion and Clutter. IPSJ Transactions on Computer Vision and Applications, 4, 1-10.
http://dx.doi.org/10.2197/ipsjtcva.4.20

12. Rusu, R.B., Blodow, N., Marton, Z.C. and Beetz, M. (2008) Aligning Point Cloud Views Using Persistent Feature Histograms. 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, 22-26 September 2008, 3384-3391.

13. Sun, Y., Paik, J., Koschan, A., Page, D.L. and Abidi, M.A. (2003) Point Fingerprint: A New 3-D Object Representation Scheme. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 33, 712-717.
http://dx.doi.org/10.1109/TSMCB.2003.814295

14. Drost, B., Ulrich, M., Navab, N. and Ilic, S. (2010) Model Globally, Match Locally: Efficient and Robust 3D Object Recognition. IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, 13-18 June 2010, 998-1005.
http://dx.doi.org/10.1109/cvpr.2010.5540108

15. El-Khoury, S., Sahbani, A. and Perdereau, V. (2007) Learning the Natural Grasping Component of an Unknown Object. Proceedings of the IEEE International Conference on Intelligent Robots and Systems, San Diego, 29 October-2 November 2007, 2957-2962.

16. Curtis, N. and Xiao, J. (2008) Efficient and Effective Grasping of Novel Objects through Learning and Adapting a Knowledge Base. Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Nice, 22-26 September 2008, 2252-2257.
http://dx.doi.org/10.1109/iros.2008.4651062

17. Morales, A., Recatala, G., Sanz, P.J. and del Pobil, A.P. (2001) Heuristic Vision-Based Computation of Planar Antipodal Grasps on Unknown Objects. Proceedings of the IEEE International Conference on Robotics and Automation, 1, 583-588.
http://dx.doi.org/10.1109/robot.2001.932613

18. Bone, G.M., Lambert, A. and Edwards, M. (2008) Automated Modeling and Robotic Grasping of Unknown Three-Dimensional Objects. Proceedings of the IEEE International Conference on Robotics and Automation, Pasadena, 19-23 May 2008, 292-298.
http://dx.doi.org/10.1109/robot.2008.4543223

19. Richtsfeld, M. and Vincze, M. (2008) Grasping of Unknown Objects from a Table Top. Proceedings of the ECCV Workshop on Vision in Action: Efficient Strategies for Cognitive Agents in Complex Environments, Marseille, October 2008.

20. Bohg, J. and Kragic, D. (2010) Learning Grasping Points with Shape Context. Robotics and Autonomous Systems, 58, 362-377.
http://dx.doi.org/10.1016/j.robot.2009.10.003

21. Harada, K., Kaneko, K. and Kanehiro, F. (2008) Fast Grasp Planning for Hand/Arm Systems Based on Convex Model. Proceedings of the IEEE International Conference on Robotics and Automation, Pasadena, 19-23 May 2008, 1162-1168.

22. Harada, K., Nagata, K., Tsuji, T., Yamanobe, N., Nakamura, A. and Kawai, Y. (2013) Probabilistic Approach for Object Bin Picking Approximated by Cylinders. 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, 6-10 May 2013, 3727-3732.
http://dx.doi.org/10.1109/ICRA.2013.6631103

23. Sanz, P.J., Requena, A., Inesta, J.M. and Del Pobil, A.P. (2005) Grasping the Not-So-Obvious: Vision-Based Object Handling for Industrial Applications. IEEE Robotics and Automation Magazine, 12, 44-52.
http://dx.doi.org/10.1109/MRA.2005.1511868

24. Fuchs, S., Haddadin, S., Keller, M., Parusel, S., Kolb, A. and Suppa, M. (2010) Cooperative Bin-Picking with Time-of-Flight Camera and Impedance Controlled DLR Lightweight Robot III. Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Taipei, 18-22 October 2010, 4862-4867.
http://dx.doi.org/10.1109/iros.2010.5651046

25. Besl, P.J. and McKay, N.D. (1992) A Method for Registration of 3-D Shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, 239-256.

26. Zhang, Z. (1994) Iterative Point Matching for Registration of Free-Form Curves and Surfaces. International Journal of Computer Vision, 13, 119-152.
http://dx.doi.org/10.1007/BF01427149

27. Sharp, G.C., Lee, S.W. and Wehe, D.K. (2002) ICP Registration Using Invariant Features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24, 90-102.

28. Gelfand, N., Ikemoto, L. and Rusinkiewicz, S. (2003) Geometrically Stable Sampling for the ICP Algorithm. Proceedings of the 4th International Conference on 3-D Digital Imaging and Modeling (3DIM '03), Banff, 6-10 October 2003, 260-267.

29. Silva, L., Bellon, O.R.P. and Boyer, K.L. (2005) Precision Range Image Registration Using a Robust Surface Interpenetration Measure and Enhanced Genetic Algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27, 762-776.