Towards Reliable Grasping and Manipulation in Household Environments


- the ability to grasp and manipulate both known and unknown objects in a reliable and repeatable manner. The combination of object recognition algorithms and extensive pre-computed knowledge bases has the potential to extend a robot's capabilities and the range of achievable tasks. However, a robot operating in a human environment is likely to also be faced with situations or objects never encountered before;
- reliable operation in a wide range of scenarios, requiring robustness to real-world problems such as imperfect calibration or trajectory following;
- safe operation in a wide variety of settings. In particular, a manipulation task should be collision-free for both the robot itself and the object that it is manipulating.

Achieving this type of functionality has required the integration of multiple modules, each charged with its own subtask, such as:
- scene segmentation and object recognition;
- collision environment acquisition and maintenance;
- grasp planning for both known and unknown objects;
- collision-free arm motion planning;
- tactile sensing for error correction during grasp execution.

It is important to note that each of these goals can be considered a research area in its own right. Furthermore, in addition to the technical challenges posed by each sub-task, their integration reveals the interplay and reciprocal constraints between them. One of the main features of this study is that it reports on an integrated system, allowing us to share the lessons learned regarding the importance of each component, as well as the potential pitfalls of combining them into a complete platform.

The integration of the multiple modules presented in this study was done using the Robot Operating System (ROS). In addition, the complete architecture is included in the current ROS distribution¹. We hope that it will prove a useful tool both to researchers aiming to improve manipulation capabilities, who can focus on
one or more particular components of our architecture, and to those attempting to build towards more complex applications, who can use the complete system as a building block.

There are a number of complete robot platforms that have demonstrated combined perception and action to manipulate objects autonomously in human environments, such as [7, 16, 15, 6, 9, 17, 1]. Preliminary results based on our approach were also presented in [12]. In this study, we expand on our previous efforts by adding a number of components, such as grasp planning for a wide variety of objects, tactile feedback during task execution, etc.

¹ The complete codebase used for achieving the results presented in this paper is available as
part of the ROS C Turtle distribution. See http://www.ros.org for general ROS information and http://www.ros.org/wiki/pr2_tabletop_manipulation_apps for documentation of the relevant code packages.

[Figure 1 diagram: the system architecture, with blocks for 3D Perception, Scene Interpreter, Collision Map Generation, Object Model Registration, Object Model Database, Grasp Planning (for known and for unknown objects), Grasp Selection, Motion Planning, and Grasp Execution with feedback.]
Fig. 1: (Top) The PR2 robot platform. (Bottom) Our system architecture.

2 Technical approach

The overall structure of our system is shown in Fig. 1; in this section we will provide additional details on each individual component. The hardware used for implementation is the PR2 personal robot, which has an omni-directional base and two 7-DOF arms. It is also equipped with a tilting laser scanner mounted on the head, two stereo cameras, a fixed laser scanner mounted on the base, and a body-mounted IMU. Encoders provide position information for each joint. The end-effector is a parallel jaw gripper equipped with fingertip capacitive sensor arrays, each consisting of 22 individual cells: 15 in a 5x3 array on the front of each sensor, 2 on each side, 2 at the tip, and 1 in the back, as shown in Figure 2.

[Figure 2 diagram: fingertip sensor layout, with a 5x3 array of elements on the front, two elements on each side, two elements at the tip, and one element at the back of each pressure sensor.]
Fig. 2: The PR2 gripper and its fingertip sensors.

2.1 Semantic Perception and Object Segmentation

The sensory input to our system is in the form of 3D point cloud data, which on the PR2 robot comes from laser range sensors and stereo cameras. The first step
consists of processing this data to obtain semantic information, with the goal of segmenting a complete image of the environment into individual graspable objects. As household objects in domestic environments are usually found on flat planar surfaces, we exploit this structure and obtain additional semantic information by computing a planar fit of the surface that provides support for the objects. Euclidean clustering on the points above the planar support provides the list of graspable objects in the scene.

In addition, our system attempts to match each segmented object against a database of known 3D models, using an iterative technique similar to the ICP algorithm [2]. Our current matching algorithm operates in a 2-DOF space and can therefore be applied in situations where partial pose information is known, e.g. for rotationally symmetrical objects such as cups, glasses or bowls resting on a flat surface. If the match between a segmented point cloud and a model in the database exceeds a certain quality threshold, the object is assumed to be recognized. Fig. 3 shows an example of a complete scene with semantic information, including the table plane and both recognized and unknown objects.

2.2 Collision Environment

In order to operate the robot safely in the environment, the system depends on a comprehensive view of possible collisions. The semantic perception block provides information about recognized objects and the table plane, while data from a wider-view sensor, like the tilting laser on the PR2, is used to generate a binary 3D occupancy grid of the arm's workspace. The occupied cells near recognized objects are filtered out to take advantage of the higher-resolution information available from the semantic perception component. Point cloud sources must also be self-filtered, so that collision points associated with the robot body or any grasped objects are not placed in the collision map. The combined collision environment consists of oriented bounding boxes for occupied cells, box primitives for the dominant table plane and for bounding boxes around unrecognized point clusters, and triangle meshes for the robot's links and any recognized objects (Fig. 3). Additionally, if the robot has grasped any objects, those will be associated with the robot's body: associated points will be filtered from point clouds before addition to the collision map, and the object is assumed to move with the robot's end-effector. The collision environment is used in grasp selection, to perform collision-aware inverse kinematics, as well as in motion planning, to check arm trajectories against possible collisions.

Fig. 3: (Left) A visualization of the collision environment, including points associated with unrecognized objects (blue) and obstacles with semantic information (green). (Center) Detail showing 3D point cloud data (grey) with 3D meshes superimposed for recognized objects. The table and unrecognized objects are represented as box shape primitives. (Right) After a successful object grasp, the bounding box associated with an unrecognized object (brown) is attached to the right gripper. Point cloud data associated with the attached object will be filtered from the collision map, and the attached object will be accounted for in motion planning to ensure that it does not contact environment obstacles or the robot's body.

While there are a number of potential challenges in constructing an accurate and complete collision representation of the environment, a paramount concern is coping with reduced visibility from sensors on the head and body when the arms are operating in the robot's workspace. The approach we take is to use motion planning to move the arms out of the workspace, and then to acquire a static, unoccluded collision map. This map, when supplemented with recognized objects, is used unchanged for the duration of a grasping or manipulation task. While this ensures that we get a holistic view of the environment, it comes at the price of requiring applications to move the arms from the workspace periodically, delay while waiting for full laser scans from the tilting laser, reduced reactivity to dynamic obstacles, and lack of responsiveness to the robot's own actions: if the robot accidentally knocks over an object, its new location will not be correctly represented in the collision map. Mitigating these shortcomings and improving the collision environment representation are active areas of future work; please see Section 4 for details.

2.3 Grasp Planning and Selection

The goal of the grasp planning component is, for every object segmented from the environment, to generate a list of possible grasps, each consisting of a gripper pose relative to the object (we note that for more dexterous hands a grasp would also have to contain information regarding finger posture). The current version of our grasp planning component provides separate methods for creating such a list, based on whether the object is recognized as one of the models in our database or treated as an unknown point cluster.

All the known objects in our model database are annotated with large sets of
stable grasp points, pre-computed using the GraspIt! simulator [8]. In our current release, the definition of a stable grasp is specific to the gripper of the PR2 robot, requiring both finger pads to be aligned with the surface of the object, and further rewarding postures where the palm of the gripper is close to the object as well. Our grasp planning tool uses a simulated annealing optimization, performed in simulation, to search for gripper poses relative to the object that satisfy this quality metric. For each object, this optimization was allowed to run over 4 hours, resulting in an average of 600 grasp points per object. An example of this process is shown in Fig. 4.

Grasps for unrecognized objects are computed at run time from 3D sensor data, using heuristics based on both the overall shape of the object and its local features. The intuition behind this approach is that many human-designed objects can be grasped by aligning the hand with the object's principal axes, starting from either above or to the side of the object, and trying to grasp it around the center. If the center is not graspable, any other part that fits inside the hand can be attempted, along similar guidelines. Grasps found according to these principles are then ranked using a small set of simple feature weights, including the number of sensed object points that fit inside the gripper, distance from the object center, etc. A number of examples are shown in Fig. 4, and additional information about this component can be found.

Once the list of possible grasps has been populated, execution proceeds in a similar fashion regardless of which grasp planner was used. Each of the grasps in the list is tested for feasibility in the current environment; this includes collision checks for both the gripper and the arm against potential obstacles, as well as generation of a collision-free arm motion plan for placing the gripper in the desired pose.
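The pre-computation step for known objects searches the space of gripper poses with simulated annealing. The following sketch illustrates that search pattern only; it is not the GraspIt! implementation, and grasp_quality is an invented stand-in for the paper's pad-alignment metric, operating on a toy 2-D gripper position.

```python
# Toy illustration of annealing over gripper poses (Section 2.3).
# grasp_quality and all parameters here are hypothetical stand-ins.
import math
import random

def grasp_quality(pose, object_center):
    # Hypothetical metric: rewards poses near the object center.
    # The real metric scores finger-pad alignment with the object surface.
    dx, dy = pose[0] - object_center[0], pose[1] - object_center[1]
    return math.exp(-(dx * dx + dy * dy))  # in (0, 1], higher is better

def anneal_grasp(object_center, iters=2000, t0=1.0, seed=0):
    # Simulated annealing: random-walk early, hill-climb as temperature drops.
    rng = random.Random(seed)
    pose = (rng.uniform(-1, 1), rng.uniform(-1, 1))
    q = grasp_quality(pose, object_center)
    best, best_q = pose, q
    for i in range(iters):
        t = t0 * (1.0 - i / iters) + 1e-6  # linear cooling schedule
        cand = (pose[0] + rng.gauss(0, 0.1), pose[1] + rng.gauss(0, 0.1))
        cq = grasp_quality(cand, object_center)
        # Always accept improvements; accept worse poses with Boltzmann probability.
        if cq >= q or rng.random() < math.exp((cq - q) / t):
            pose, q = cand, cq
            if q > best_q:
                best, best_q = pose, q
    return best, best_q
```

In the real system many such runs accumulate hundreds of distinct high-quality grasps per object; the sketch returns only the single best pose found.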
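The run-time heuristic for unrecognized clusters can be sketched as follows. This is a minimal stand-in, not the released planner: it considers only top-down grasps along the cluster's principal horizontal axis, and the gripper width, candidate spacing, and feature weights are all hypothetical.

```python
# Sketch of principal-axis grasp generation and ranking for point clusters
# (Section 2.3). Constants and the scoring weights are hypothetical.
import numpy as np

GRIPPER_WIDTH = 0.08  # assumed max gripper opening, in meters

def rank_overhead_grasps(points, step=0.02):
    """points: (N, 3) cluster. Returns candidate grasp positions along the
    principal horizontal axis (meters from the centroid), best first, for a
    top-down grasp closing across the minor axis."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal axes of the horizontal footprint via PCA (covariance eigenvectors).
    cov = np.cov(centered[:, :2].T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    along = centered[:, :2] @ eigvecs[:, np.argmax(eigvals)]   # major axis coords
    across = centered[:, :2] @ eigvecs[:, np.argmin(eigvals)]  # minor axis coords
    candidates = []
    for x in np.arange(along.min(), along.max(), step):
        in_hand = np.abs(along - x) < GRIPPER_WIDTH / 2
        width = np.ptp(across[in_hand]) if in_hand.any() else np.inf
        if width >= GRIPPER_WIDTH:
            continue  # object too wide to close around at this position
        # Hypothetical ranking: many sensed points inside the gripper, near center.
        score = in_hand.sum() - 50.0 * abs(x)
        candidates.append((score, float(x)))
    candidates.sort(reverse=True)
    return [x for _, x in candidates]
```

For an elongated object this prefers a grasp around the middle, falling back to off-center positions that still fit inside the hand, mirroring the intuition described above.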
2.4 Motion Planning

Motion planning is used throughout the pipeline to ensure collision-free paths for grasping and placing objects. Motion planning takes into account collisions with the environment, as well as joint limits and self-collisions of the robot's arms. Any object grasped by the robot is included in the robot model, in order to avoid collisions between the grasped object and the rest of the environment during transport.

Fig. 4: Grasp planning. (Top row) Grasp planning in a simulated environment for a known object: the object model, a simulated grasp, and the complete set of pre-computed grasps. (Middle row) Grasp planning from 3D sensor data for novel objects. (Bottom row) Grasp execution for novel objects.

Collision-aware inverse kinematics is first used to determine the feasibility of a grasp, by finding a collision-free solution for the desired gripper configuration. The PR2's arms have 7 degrees of freedom each; thus, there is one redundant degree of freedom available that needs to be resolved in some manner. We choose to parameterize this extra degree of freedom using the shoulder roll joint of the PR2 arm, and search over this redundant space for a collision-free solution corresponding to the desired end-effector pose.

Sampling-based planning is then combined with collision-aware inverse kinematics to plan motions to the desired poses for grasping, lifting and placing objects. For grasping, paths are planned to a pre-grasp location that is offset from the desired grasp location, followed by a straight-line path in Cartesian space to the grasp pose. Objects that have been grasped are attached to the robot model to avoid collisions.
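The redundancy search described above can be sketched as a one-dimensional sweep over the shoulder roll angle. Here analytic_ik and in_collision are toy stand-ins, not PR2 code: the sketch only shows the search pattern of stepping outward from a preferred roll until a collision-free solution appears.

```python
# Sketch of collision-aware IK over the redundant DOF (Section 2.4).
# analytic_ik and in_collision are hypothetical stand-ins for the real
# PR2 analytic IK solver and collision checker.

def analytic_ik(pose, shoulder_roll):
    """Toy stand-in: pretend the other 6 joints are solvable whenever the
    roll is within limits, and return a dummy 7-DOF joint vector."""
    if abs(shoulder_roll) > 2.0:
        return None  # outside joint limits: no solution
    return [shoulder_roll] + [0.0] * 6

def in_collision(joints):
    """Toy stand-in: a band of roll angles collides with an obstacle."""
    return -0.5 < joints[0] < 0.7

def collision_free_ik(pose, resolution=0.1):
    """Sweep the redundant dimension outward from roll = 0, returning the
    first collision-free solution (preferring rolls near the current posture)."""
    steps = int(2.0 / resolution)
    for i in range(steps + 1):
        for roll in ([0.0] if i == 0 else [i * resolution, -i * resolution]):
            joints = analytic_ik(pose, roll)
            if joints is not None and not in_collision(joints):
                return joints
    return None
```

Searching outward from the current roll, rather than scanning the whole range, keeps the chosen solution close to the arm's present posture when several collision-free solutions exist.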
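The pre-grasp approach above can be sketched in a few lines: back the gripper off along the approach direction, then interpolate a straight Cartesian line to the grasp pose. The 10 cm standoff and 1 cm waypoint spacing are assumed values, and real execution would interpolate full 6-DOF poses and verify IK feasibility at each waypoint, which this sketch omits.

```python
# Sketch of pre-grasp offset and straight-line approach (Section 2.4).
# Standoff and step sizes are hypothetical; positions only, no orientation.
import numpy as np

def pregrasp_pose(grasp_pos, approach_dir, standoff=0.10):
    """Back off `standoff` meters along the (normalized) approach direction."""
    a = np.asarray(approach_dir, dtype=float)
    a /= np.linalg.norm(a)
    return np.asarray(grasp_pos, dtype=float) - standoff * a

def cartesian_line(start, goal, step=0.01):
    """Evenly spaced waypoints for the final straight-line Cartesian segment."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    n = max(1, int(np.ceil(np.linalg.norm(goal - start) / step)))
    return [start + (goal - start) * t for t in np.linspace(0.0, 1.0, n + 1)]
```

The motion planner is only asked to reach the pre-grasp pose; the short final segment is executed open-loop along the line, which is what makes the collision check of the grasp pose itself (rather than a planned path to it) sufficient.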
Towards Reliable Grasping and Manipulation in Household Environments

Matei Ciocarlie, Kaijen Hsiao, E. Gil Jones, Sachin Chitta, Radu Bogdan Rusu and Ioan A. Şucan

Abstract: We present a complete software architecture for reliable grasping of household objects. Our work combines aspects such as scene interpretation from 3D range data, grasp planning, motion planning and grasp failure
