Appendix: Description of APE-plan
The main procedure of APE-plan is shown in Figure 5. The global variables b, b_s, and d represent the search breadth, the sample breadth, and the search depth, respectively. APE-plan receives as input a task τ to be planned for, a set of methods M, and the current state s, and returns a refinement tree T for τ. It starts by creating a refinement tree with a single node n labeled τ and calls a sub-routine, APE-plan-Task, which builds a complete refinement tree for n. APE-plan has three main sub-procedures: APE-plan-Task, APE-plan-Method, and APE-plan-Command. APE-plan-Task examines up to b method instances for refining a task τ: it calls APE-plan-Method for each of the b method instances and returns the tree with the best value, where every refinement tree has a value based on its probability of success and its cost. Once APE-plan-Task has chosen a method instance m for τ, it relabels the node n from τ to m in the current refinement tree T. The steps of m are then simulated one by one by the sub-routine APE-plan-Method, which first checks whether the search has reached the maximum depth d.
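The recursive structure described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the dictionary-based tree representation, the function names, the use of a unit cost per command, and the omission of sampling (b_s) and probability estimates are all simplifying assumptions made here for clarity.

```python
# Hypothetical sketch of APE-plan's recursion: tasks are refined by
# trying up to B candidate methods, keeping the best-valued tree.
B, D = 2, 3  # search breadth b and search depth d (sample breadth omitted)

def ape_plan_task(task, methods, state, depth):
    """Try up to B method instances for `task` (APE-plan-Task) and
    return the refinement tree with the best value (lowest cost here)."""
    best = None
    for m in methods.get(task, [])[:B]:
        tree = ape_plan_method(m, methods, state, depth)
        if tree is not None and (best is None or tree["cost"] < best["cost"]):
            best = tree
    return best  # None means no method could refine the task

def ape_plan_method(method, methods, state, depth):
    """Simulate the steps of `method` one by one (APE-plan-Method).
    Subtasks recurse into ape_plan_task; commands stand in for
    APE-plan-Command and are charged a unit cost in this sketch."""
    if depth >= D:  # first check whether the maximum depth is reached
        return None
    node = {"label": method["name"], "children": [], "cost": 0.0}
    for step in method["steps"]:
        if step in methods:  # step is a subtask: refine it recursively
            sub = ape_plan_task(step, methods, state, depth + 1)
            if sub is None:
                return None  # refinement of a subtask failed
            node["children"].append(sub)
            node["cost"] += sub["cost"]
        else:  # step is a command: simulate it at unit cost
            node["children"].append({"label": step, "children": [],
                                     "cost": 1.0})
            node["cost"] += 1.0
    return node
```

For example, with two candidate methods for a task `fetch`, one with two command steps and one with three, `ape_plan_task("fetch", methods, {}, 0)` returns the tree rooted at the cheaper method, mirroring how APE-plan-Task picks the best of the b refinement trees.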