“COIN” is a project funded by the Swedish Foundation for Strategic Research (SSF), titled “Co-adaptive human-robot interactive systems”. The consortium consists of the Division of Decision and Control Systems, the Division of Robotics, Perception and Learning, and the Division of Speech, Music and Hearing at KTH (Stockholm, Sweden), and the Social Robotics Lab at Uppsala University (Uppsala, Sweden).
July 29: Our summary paper of the COIN project has been accepted for publication!
Mar. 22: New Demo videos added: A Human-In-the-Loop, LTL Planning and Control ROS Package - Turtlebot Example Demonstration and A Human-In-the-Loop, LTL Planning and Control ROS Package - Factory Setting Demonstration!
Jan. 12: Consortium Meeting held over Zoom.
June 10: New Demo video: Online Human-in-the-Loop Control of UAVs Under LTL Tasks with Moving Obstacles!
Mar. 20: Annual Report for 2019 submitted.
Dec. 11: New Demo video: The RDG-Map Scenario: A Pedagogical Reference Resolution Agent!
Aug. 19: New Demo video: Identifying and Reacting to Attention, Hearing, Understanding and Acceptance in a Poster Presentation Scenario!
Aug. 16: New Demo video: Online task re-assignment of UAVs under LTL specification!
Mar. 15: Project Meeting held at KTH.
Sep. 20: New Demo video: Disambiguating Verbal Requests in Human-Robot Interaction!
Sep. 3-4: COIN project dissemination at the SSF Machine and Other Intelligence Conference in Norrköping.
Aug. 30: New Demo video: LTL Patrolling on a Human/Robot Mixed Environment!
June 11: Project Meeting held at KTH.
Jan. 26: New Demo video: Human-in-the-Loop Mixed-Initiative Control under Temporal Tasks!
Jan. 25: Project Meeting held at KTH.
Sep. 1: First Meeting of the SSF Program Committee.
Aug. 30: Project Meeting held at KTH.
Apr. 6: Project Meeting held at Uppsala University.
July 1: Project starts! Check out www.coinssf.se!
Within COIN, our goal is to develop a systematic, bi-directional short- and long-term adaptive framework that yields safe, effective, efficient, and socially acceptable robot behaviors and human-robot interactions. The project pursues the following three concrete scientific objectives:
Human-in-the-loop plan and control synthesis with guarantees;
Long-term learning and co-adaptation for personalized autonomy;
Short-term multi-modal interaction.