This paper presents a method that allows mobile systems with uncertainty in motion and sensing to react to unknown environments while satisfying high-level specifications. The method uses feedback-based information roadmaps (FIRMs) to break the curse of history associated with partially observable systems. A policy is found in a product Markov decision process created from a transition system and a Rabin automaton. The proposed solution allows the system to react to previously unknown elements of the environment.
This paper addresses the problem of path planning for multiple robots under high-level specifications given as syntactically co-safe linear temporal logic formulae. The proposed solution extends the idea of sampling-based methods to the multi-robot case. The method samples the configuration space of the robots to incrementally construct a transition system that models the motion of all the robots as a group. This transition system is then combined, via a Cartesian product, with a Büchi automaton representing the specification. The product is updated with each expansion of the transition system until a solution is found.
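The Cartesian product between a transition system and a specification automaton, used in both papers above, can be sketched as follows. This is a generic illustration under assumed data structures (states as hashable values, labels as strings, the automaton transition function as a dictionary), not the implementation from either paper:

```python
from itertools import product as cartesian


def build_product(ts_states, ts_edges, ts_label,
                  aut_states, aut_delta, aut_accept):
    """Product of a labelled transition system and a finite automaton.

    A product state (s, q) pairs a system state s with an automaton
    state q; an edge exists when the system can move s -> s2 and the
    automaton can read the label of s2 from q.
    """
    prod_states = set(cartesian(ts_states, aut_states))
    prod_edges = []
    for (s, s2) in ts_edges:
        for q in aut_states:
            q2 = aut_delta.get((q, ts_label[s2]))
            if q2 is not None:
                prod_edges.append(((s, q), (s2, q2)))
    # Accepting product states inherit acceptance from the automaton.
    accepting = {(s, q) for (s, q) in prod_states if q in aut_accept}
    return prod_states, prod_edges, accepting
```

For an incremental variant, as in the sampling-based approach, only the edges incident to newly sampled system states would be recomputed rather than rebuilding the whole product.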
This paper describes a method to find optimal policies for stochastic dynamic systems that maximise the probability of satisfying real-time properties. The method constructs a coarse abstraction of the original system, which is then modelled as a bounded-parameter Markov decision process. In each region of the abstraction, a sampling-based algorithm is utilised to compute local policies that allow the system to move between regions. Then, an optimal policy is found by solving a reachability problem on a Cartesian product between the abstraction and a timed automaton.
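The reachability problem on the product can be solved with standard value iteration for maximum reachability probability. The sketch below illustrates that computation on a plain MDP; the data structures (dictionaries keyed by state and state-action pair) are assumptions for illustration and do not reflect the bounded-parameter machinery of the paper:

```python
def max_reach_probability(states, actions, trans, targets,
                          max_iters=1000, tol=1e-9):
    """Maximum probability of reaching `targets` in an MDP.

    `trans[(s, a)]` is a list of (successor, probability) pairs.
    Value iteration: target states hold value 1; other states take
    the best expected value over their available actions.
    """
    v = {s: (1.0 if s in targets else 0.0) for s in states}
    for _ in range(max_iters):
        delta = 0.0
        for s in states:
            if s in targets:
                continue
            best = 0.0
            for a in actions.get(s, []):
                val = sum(p * v[s2] for (s2, p) in trans[(s, a)])
                best = max(best, val)
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            break
    return v
```

The optimal policy is then recovered by taking, in each state, the action that achieves the maximum in the converged value function.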