Pauly, Leo and Agboh, Wisdom C. and Hogg, David C. and Fuentes, Raul (2021) O2A: One-Shot Observational Learning with Action Vectors. Frontiers in Robotics and AI, 8. ISSN 2296-9144
Abstract
We present O2A, a novel method for learning to perform robotic manipulation tasks from a single (one-shot) third-person demonstration video. To our knowledge, this is the first time this has been achieved from a single demonstration. The key novelty lies in pre-training a feature extractor to create a perceptual representation of actions, which we call “action vectors”. The action vectors are extracted using a 3D-CNN model pre-trained as an action classifier on a generic action dataset. The distance between the action vectors of the observed third-person demonstration and of trial robot executions is used as a reward for reinforcement learning of the demonstrated task. We report on experiments in simulation and on a real robot, with changes in the viewpoint of observation, the properties of the objects involved, the scene background, and the morphology of the manipulator between the demonstration and learning domains. O2A outperforms baseline approaches under different domain shifts and performs comparably to an Oracle that uses an ideal reward function. Videos of the results, including demonstrations, can be found on our project website.
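As a rough illustration of the abstract's core idea, rewarding the robot according to how close its trial's action vector is to the demonstration's, the sketch below computes such a reward. This is a minimal, hypothetical example, not the authors' implementation: the function name `o2a_reward`, the 512-dimensional embeddings, the random stand-in vectors, and the choice of Euclidean distance are all assumptions; the abstract only specifies that a distance in action-vector space is used.

```python
import numpy as np

def o2a_reward(demo_vector: np.ndarray, trial_vector: np.ndarray) -> float:
    """Reward for the RL agent: larger (less negative) when the robot's
    trial execution yields an action vector close to the demonstration's.
    Negative Euclidean distance is an assumption made for this sketch."""
    return -float(np.linalg.norm(demo_vector - trial_vector))

# Toy usage: in the paper the action vectors come from a 3D-CNN pre-trained
# as an action classifier on a generic action dataset; here we fake them
# with random vectors purely to show the reward computation.
rng = np.random.default_rng(0)
demo_vec = rng.standard_normal(512)   # action vector of the demonstration video
trial_vec = rng.standard_normal(512)  # action vector of one robot trial
print(f"reward = {o2a_reward(demo_vec, trial_vec):.3f}")
```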
| Item Type: | Article |
|---|---|
| Subjects: | Oalibrary Press > Mathematical Science |
| Depositing User: | Managing Editor |
| Date Deposited: | 26 Oct 2023 03:59 |
| Last Modified: | 26 Oct 2023 03:59 |
| URI: | http://asian.go4publish.com/id/eprint/2408 |