Learning Stable Robot Grasping with Transformer-based Tactile Control Policies

1University of Genoa, 2National University of Singapore, 3King’s College London, 4A*STAR

Abstract

Assessing grasp stability is an important skill for dexterous robot manipulation, and stability can be inferred from the haptic information provided by a tactile sensor. Control policies have to detect rotational displacement and slippage from tactile feedback and determine a re-grasp strategy in terms of location and force. The classic stable-grasping task only trains control policies to solve for the re-grasp location, using objects with a fixed center of gravity. In this work, we propose a revamped version of the stable-grasping task that optimizes both re-grasp location and gripping force for objects with an unknown and moving center of gravity. We tackle this task with a model-free, end-to-end Transformer-based reinforcement learning framework. We show that, after training, our approach achieves both objectives in simulation and, via zero-shot transfer, in a real-world setup. We also provide a performance analysis of different models to understand the dynamics of optimizing two opposing objectives.
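To make the policy architecture more concrete, below is a minimal sketch, in PyTorch, of a Transformer-based tactile control policy that reads a short sequence of tactile observations and outputs a re-grasp location and a gripping force. This is an illustrative assumption, not the authors' released implementation: the module names, feature dimensions, and the two-head action layout are all hypothetical.

    # Hypothetical sketch of a Transformer-based tactile control policy.
    # Assumptions (not from the paper): each grasp step yields a flattened
    # 64-dim tactile feature vector; the action is a normalized re-grasp
    # location plus a normalized gripping force.
    import torch
    import torch.nn as nn

    class TactileTransformerPolicy(nn.Module):
        def __init__(self, tactile_dim=64, d_model=128, n_heads=4, n_layers=2, max_steps=16):
            super().__init__()
            self.embed = nn.Linear(tactile_dim, d_model)               # project tactile features
            self.pos = nn.Parameter(torch.zeros(max_steps, d_model))   # learned positional encoding
            layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=256, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.loc_head = nn.Linear(d_model, 1)     # re-grasp location offset along the object
            self.force_head = nn.Linear(d_model, 1)   # gripping-force command

        def forward(self, tactile_seq):
            # tactile_seq: (batch, steps, tactile_dim), one row per grasp attempt so far
            x = self.embed(tactile_seq) + self.pos[: tactile_seq.size(1)]
            h = self.encoder(x)[:, -1]                  # latest token summarizes the grasp history
            location = torch.tanh(self.loc_head(h))     # normalized location in [-1, 1]
            force = torch.sigmoid(self.force_head(h))   # normalized force in [0, 1]
            return location, force

    # Usage: one forward pass on a dummy batch of 4 grasp histories, 8 steps each.
    policy = TactileTransformerPolicy()
    loc, force = policy(torch.randn(4, 8, 64))
    print(loc.shape, force.shape)  # torch.Size([4, 1]) torch.Size([4, 1])

In this kind of setup, both heads share the same Transformer encoder, so the policy can trade off the two opposing objectives (keeping the grasp stable while not over-squeezing) from the same tactile history.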

On Real Robot and Sensor

BibTeX


@misc{puang2024learningstablerobotgrasping,
  title={Learning Stable Robot Grasping with Transformer-based Tactile Control Policies},
  author={En Yen Puang and Zechen Li and Chee Meng Chew and Shan Luo and Yan Wu},
  year={2024},
  eprint={2407.21172},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2407.21172},
}