
Developmental psychology inspired models for physical and social reasoning in human-robot joint action

Yoan Sallami 1
1 LAAS-RIS - Équipe Robotique et InteractionS
LAAS - Laboratoire d'analyse et d'architecture des systèmes
Abstract : In order to perform a collaborative task with a person, a robot needs to be able to reason about the objects and the people it interacts with. Developmental psychology gives good insight into how children develop models of the world, which can help in designing new robotic architectures for efficient and robust human-robot interaction.

We first present an architecture based on a hybrid data structure that combines geometric and relational information with neural representations. This architecture aims to benefit from recent progress in computer vision and natural language processing while enabling efficient 3D reasoning: on top of perception it builds a consistent 3D model of the world, which allows rendering images from any point of view in the scene.

We then explore two key reasoning modalities in the context of human-robot joint action: physical reasoning and belief reasoning. Physical reasoning allows the robot to use Newtonian physics to reason about objects that are not visible and to monitor what is physically plausible in order to infer actions. In this thesis, we present work inspired by developmental psychology in which we use a physics simulator to correct the positions of perceived objects and to infer the positions of non-visible objects using Newtonian physics. The algorithm can also infer the human partner's actions by analyzing physical violations between the simulated world and the perceived one. Belief reasoning is another key capability for robots that assist humans. At its core, this reasoning is based on visual perspective taking: the ability to reason from the point of view of another person. We also show the modularity of the approach by binding ontology-based reasoners to the situation-assessment component that performs visual perspective taking. This binding allows querying the entities generated by the perceptual and physical system using the SPARQL language.
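The physical-reasoning loop described above can be sketched in a few lines. This is a minimal illustration only, with a trivial stand-in for the physics engine (the thesis uses a full rigid-body simulator); all names, the support plane, and the violation threshold are illustrative assumptions, not the thesis implementation.

```python
import math

SUPPORT_HEIGHT = 0.0        # height of the supporting surface (illustrative)
VIOLATION_THRESHOLD = 0.05  # disagreement (in metres) treated as a physical violation

def simulate_step(position):
    """Stand-in physics step: an unsupported object settles onto its support plane."""
    x, y, z = position
    return (x, y, SUPPORT_HEIGHT)

def update_beliefs(perceived, believed):
    """Fuse perception with simulation.

    perceived: dict object -> observed position, or None when occluded
    believed:  dict object -> last physically consistent position
    Returns (new_beliefs, inferred_actions).
    """
    new_beliefs, actions = {}, []
    for obj, last_pos in believed.items():
        simulated = simulate_step(last_pos)
        observed = perceived.get(obj)
        if observed is None:
            # Occluded object: trust the simulated (physically plausible) position.
            new_beliefs[obj] = simulated
        elif math.dist(observed, simulated) > VIOLATION_THRESHOLD:
            # Physical violation: the object could not have moved there on its
            # own, so infer that the human partner acted on it.
            actions.append(obj)
            new_beliefs[obj] = observed
        else:
            # Small disagreement: keep the physically consistent pose.
            new_beliefs[obj] = simulated
    return new_beliefs, actions
```

For example, a cup perceived far from where physics predicts it triggers an inferred human action, while an occluded cup keeps its simulated position.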
We illustrate the interest of this approach with preliminary work on neural language models that benefit from the expressiveness of SPARQL queries. We conclude with a discussion of the system's limitations and open directions for future work that could lead to exciting research in this field.
Submitted on : Tuesday, March 1, 2022 - 6:03:08 PM
Last modification on : Wednesday, June 1, 2022 - 4:13:49 AM


Version validated by the jury (STAR)


  • HAL Id : tel-03356606, version 2


Yoan Sallami. Developmental psychology inspired models for physical and social reasoning in human-robot joint action. Robotics [cs.RO]. Université Paul Sabatier - Toulouse III, 2021. English. ⟨NNT : 2021TOU30102⟩. ⟨tel-03356606v2⟩


