A Human-In-the-Loop, LTL Planning and Control ROS Package - Factory Setting Demonstration
This video demonstrates our new ROS package for implementing a human-in-the-loop, LTL-based robot planner. In this video, we apply the software to a multi-agent factory setup in which heterogeneous robot agents are tasked with transporting packages. Each set of robot agents (including mobile bases and robotic manipulators) is given an LTL-based high-level task, and the software outputs a plan for each robot. The human-in-the-loop components of the software allow a human to adjust the plan online while ensuring safety of the overall system. The demonstration shows the human adjusting the package transport plan to allow for package inspection. The LTL plan also allows for more complex tasks, including recharging the battery when the battery state is low. The ROS package code used in this demonstration can be found at: https://github.com/KTH-DHSG/ltl_automaton_core
A Human-In-the-Loop, LTL Planning and Control ROS Package - Turtlebot Example Demonstration
This video demonstrates our new ROS package for implementing a human-in-the-loop, LTL-based robot planner. In this video, we show a simple example demonstrating the human-in-the-loop mixed-initiative control and inverse reinforcement learning ROS plug-ins. For a more complex implementation of this software, please see the video above. The ROS package code used in this demonstration can be found at: https://github.com/KTH-DHSG/ltl_automaton_core
Online Human-in-the-Loop Control of UAVs Under LTL Tasks with Moving Obstacles
The experiment shows a scenario in which a group of UAVs is given surveillance tasks in the form of Linear Temporal Logic (LTL). The agents are also tasked with avoiding regions occupied by obstacles (a turtlebot). The UAVs are aware of each other's plans, but not of the moving obstacle. As the obstacle enters the workspace, each agent must re-plan online to satisfy the surveillance task while avoiding the rogue obstacle. Additionally, the human operator is allowed to control the UAVs via mixed-initiative control. In this setting, the human is allowed to control the UAVs as long as the LTL task is satisfied. In the event that the human operates the system incorrectly (e.g., attempting to crash into the obstacle), the mixed-initiative control overrides the human input and ensures satisfaction of the high-level task.
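As a rough illustration (the exact formulas used in the experiment are not shown in the description), a surveillance task with obstacle avoidance of this kind is typically written as an LTL formula over atomic propositions marking the regions of interest:

```latex
% Illustrative only: \pi_1, \pi_2 label surveillance regions and
% \pi_{\mathrm{obs}} the obstacle-occupied region; the propositions
% used in the actual experiment may differ.
\varphi \;=\; \square \lozenge \pi_1 \;\wedge\; \square \lozenge \pi_2 \;\wedge\; \square \neg \pi_{\mathrm{obs}}
```

Here \(\square \lozenge \pi_i\) ("always eventually") encodes repeated surveillance of each region, and \(\square \neg \pi_{\mathrm{obs}}\) encodes the safety requirement of never entering an obstacle-occupied region.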
Identifying and Reacting to Attention, Hearing, Understanding and Acceptance in a Poster Presentation Scenario
Footage from an experiment performed at KTH during April and May of 2019. Four examples of human reactions to a system presenting a painting are shown. The presenting system, in turn, reacts to the reactions of the test participants. Each scene is annotated and described briefly in the video.
Online task re-assignment of UAVs under LTL specification
The experiment shows a scenario in which an operator can update a Linear Temporal Logic task online for a group of UAVs. Initially, the operator specifies LTL formulas for both UAVs. The first UAV is tasked to surveil regions in yellow, while the second UAV is tasked to surveil regions in red. During their surveillance, the operator updates the LTL formulas online such that both UAVs update their behaviors accordingly.
Human-in-the-Loop Mixed-Initiative Control under Temporal Tasks
This video presents the motion control and task planning problem of mobile robots under complex high-level tasks and human initiatives. The assigned task is specified as Linear Temporal Logic (LTL) formulas that consist of hard and soft constraints. The human initiative influences the robot autonomy in two explicit ways: with additive terms in the continuous control input and with contingent temporary-task assignments. We propose an online coordination scheme that encapsulates (i) a mixed-initiative continuous controller that ensures all-time safety despite possible human errors, (ii) a plan adaptation scheme that accommodates new features discovered in the workspace and the short-term tasks assigned by the human operator during run time, and (iii) an iterative inverse reinforcement learning (IRL) algorithm that allows the robot to asymptotically learn the human's preference over the weighting parameters during plan synthesis.
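A minimal sketch of the additive mixed-initiative structure described above (the symbols are illustrative, not taken from the video):

```latex
% u_r: autonomous control input that tracks the LTL plan;
% u_h: continuous human input; \kappa(x) \in [0,1]: a state-dependent
% gain that vanishes near unsafe sets, so that safety is guaranteed
% for any human input u_h.
u(x,t) \;=\; u_r(x,t) \;+\; \kappa(x)\, u_h(t)
```

With this structure the human can freely steer the robot in the interior of the safe workspace, while the gain \(\kappa(x)\) smoothly removes human authority as the state approaches a region where the hard LTL constraints would be violated.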
LTL Patrolling on a Human/Robot Mixed Environment
The experiment shows a scenario in which an operator defines a Linear Temporal Logic patrolling task for a group of robots. Each robot fulfills its own task while navigating in an environment containing obstacles, walking humans, and other robots. Green regions are the starting poses, while red regions are target poses for the robots.
Multi agent controller synthesis under LTL specifications
This video presents a new approach for control synthesis of non-deterministic transition systems under Linear Temporal Logic specifications, with applications to multiple Unmanned Aerial Vehicle (UAV) motion planning problems. The consideration of such systems is motivated by the non-determinism possibly introduced while abstracting dynamical systems into finite transition systems. More precisely, we consider transition systems enhanced with a progress set describing the fact that the system cannot stay indefinitely in some subset of states. The control synthesis problem is first translated into a terminating planning problem. Then, a backward reachability strategy searches for a path from the initial set to the goal set. At each iteration, subsets of states contained in the progress set are added to the path, thus ensuring reachability of the goal set in finite time. If a solution to the terminating problem is found, the obtained controller is translated back to the initial problem formulation.
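The backward reachability step can be sketched as follows. This is a simplified illustration, not the package's actual implementation: it omits the progress-set mechanism and keeps only the core fixed-point computation, where a state is winning if some action forces all of its non-deterministic successors into the current winning set. All names are hypothetical.

```python
def backward_reachable(states, actions, post, goal):
    """Compute the states from which the goal set is guaranteed
    reachable in a non-deterministic transition system.

    post(s, a) returns the set of possible successors of state s
    under action a (non-determinism: possibly several successors).
    A state is added to the winning set when some action sends
    *all* of its successors into the current winning set.
    """
    win = set(goal)
    changed = True
    while changed:          # iterate to a fixed point
        changed = False
        for s in states:
            if s in win:
                continue
            for a in actions:
                succs = post(s, a)
                # non-empty, and every outcome lands in the winning set
                if succs and succs <= win:
                    win.add(s)
                    changed = True
                    break
    return win


# Tiny example: from state 0, action 'a' may lead to 1 or 2, but both
# of those reach the goal state 3; state 4 can always loop on itself,
# so reaching the goal from 4 cannot be guaranteed.
trans = {(0, 'a'): {1, 2}, (0, 'b'): {3}, (1, 'a'): {3},
         (2, 'a'): {3}, (3, 'a'): {3}, (4, 'a'): {0, 4}}
post = lambda s, a: trans.get((s, a), set())
print(backward_reachable({0, 1, 2, 3, 4}, {'a', 'b'}, post, {3}))
# -> {0, 1, 2, 3}
```

Note the subset test `succs <= win`: under non-determinism it is not enough that *some* successor reaches the goal, since an adversarial environment could pick the other branch.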
Decentralized Motion Planning with Collision Avoidance for a Team of UAVs under High Level Goals
This experiment addresses the motion planning problem for a team of aerial agents under high-level goals. We propose a hybrid control strategy that guarantees the accomplishment of each agent's local goal specification, given as a temporal logic formula, while ensuring inter-agent collision avoidance. In particular, by defining 3-D spheres that bound the agents' volume, we extend previous work on decentralized navigation functions and propose control laws that navigate the agents among predefined regions of interest of the workspace while avoiding collisions with each other. This allows us to abstract the motion of the agents as finite transition systems and, by employing standard formal verification techniques, to derive a high-level control algorithm that satisfies the agents' specifications.
Disambiguating Verbal Requests in Human-Robot Interaction
We develop a system that, in the presence of ambiguous verbal requests (i.e., more than one object matches the color or shape described by the user), highlights the candidate objects using the visualization interface while awaiting further verbal commands that refine the request. This process continues until there is no more ambiguity and the robot is able to pick up the target object. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions (mixed reality, augmented reality, and a monitor as the baseline) using objective measures such as time and accuracy, and subjective measures such as engagement, immersion, and display interference.
Personalized Human-Robot Co-Adaptation in Instructional Settings using Reinforcement Learning
This video presents a Reinforcement Learning (RL) framework for personalization that selects a robot's supportive behaviors to maximize the user's task performance in a learning scenario where a Pepper robot, acting as a tutor, helps people learn how to solve grid-based logic puzzles. The approach is based on reward-estimation mechanisms that are embedded in the human-robot interaction cycle and defined within an RL framework for policy learning.
Implicit Probes for Measuring Human-Robot Engagement
This video investigates the use of tangible implicit probes to gauge the user's social engagement with a robot. Paying attention to the robot's implicit probes is associated with higher social engagement and can lead to a more positive interaction with the robot. In this experiment, the user is asked to assemble a LEGO structure with the help of a robot. The results indicate that using implicit probes may be feasible, as there is a strong relation between paying attention to the implicit probes and the users' own assessment of their social engagement.