Current research interests in ICR include control theory, formal methods, learning, and robotics.

Research Topics:

Learning Based Dexterous Manipulation (2024)

Reinforcement learning has shown great potential in learning policies for dexterous manipulation. While many existing works have focused on rigid objects, it remains an open problem to manipulate articulated objects and generalize across different categories. To address these challenges, we develop a novel framework that enhances diffusion policy with linear temporal logic (LTL) representations and affordance learning to improve the learning efficiency and generalizability of articulated dexterous manipulation.
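As a purely illustrative sketch (the predicate names below are ours, not taken from the paper), an LTL specification for an articulated-object task such as opening a cabinet might encode "eventually grasp the handle, then eventually open the door, while never colliding":

```latex
\varphi = \Diamond\bigl(\mathrm{grasp\_handle} \wedge \Diamond\,\mathrm{door\_open}\bigr)
          \;\wedge\; \Box\,\neg\,\mathrm{collision}
```

Here $\Diamond$ ("eventually") captures the required ordering of sub-goals and $\Box$ ("always") captures the safety requirement, which is the kind of logically structured task description an LTL representation makes available to the policy.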

Learning End-to-End Legged Perceptive Parkour Skills (2024)

We develop an End-to-End Legged Perceptive Parkour Skill Learning (LEEPS) framework to train quadruped robots to master parkour skills in complex environments. In particular, LEEPS incorporates a vision-based perception module equipped with multi-layered scans, supplying robots with comprehensive, precise, and adaptable information about their surroundings. Leveraging this visual data, a position-based task formulation frees the robot from velocity constraints and directs it toward the target using novel reward mechanisms. The resulting controller enables an affordable quadruped robot to successfully traverse obstacles that were previously out of reach. [Experiment Video]

Online Reactive Motion Planning of Quadruped Robots (2023)

Temporal logic-based motion planning has been extensively studied to address complex robotic tasks. However, existing works primarily focus on static environments or assume that the robot has full observation of the environment. This limits their practical applications, since real-world environments are often dynamic and robots may suffer from partial observations. To tackle these issues, this study proposes a framework for vision-based reactive temporal logic motion planning (V-RTLMP) for robots equipped with LiDAR sensing. V-RTLMP is designed to perform high-level linear temporal logic (LTL) tasks in unstructured dynamic environments.

Online Motion Planning with Soft Metric Interval Temporal Logic (2022)

This work proposes a control framework that combines hard constraints, which enforce safety requirements, with soft constraints, which enable task relaxation. Metric interval temporal logic (MITL) specifications are employed to handle time constraints. By constructing a relaxed timed product automaton, an online motion planning strategy is synthesized with a receding horizon controller to generate policies that achieve multiple objectives in decreasing order of priority: 1) formally guarantee the satisfaction of hard safety constraints; 2) mostly fulfill soft timed tasks; and 3) collect time-varying rewards as much as possible. [Paper] [Experiment Video]
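To make the hard/soft split concrete, a hypothetical specification pair (the atomic propositions here are our own illustration) could look like:

```latex
\varphi_{\mathrm{hard}} = \Box\,\neg\,\mathrm{obstacle},
\qquad
\varphi_{\mathrm{soft}} = \Diamond_{[0,10]}\,\mathrm{goal}
```

The hard formula demands that obstacles are always avoided, while the soft formula $\Diamond_{[0,10]}\,\mathrm{goal}$ asks that the goal be reached within 10 time units; when the timed task is infeasible, the relaxed timed product automaton allows it to be violated at a cost rather than causing planning to fail outright.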

Temporal Logic Guided Motion Primitives for Complex Manipulation Tasks with User Preferences (2022)

Existing dynamic movement primitive (DMP) based methods mainly focus on simple go-to-goal tasks. Motivated to handle tasks beyond point-to-point motion planning, this work presents temporal logic guided optimization of motion primitives for complex manipulation tasks with user preferences. In particular, weighted truncated linear temporal logic (wTLTL) is incorporated, which not only enables the encoding of complex tasks that involve a sequence of logically organized action plans with user preferences, but also provides a convenient and efficient means to design the cost function. Black-box optimization is then adapted to identify the optimal shape parameters of DMPs to enable motion planning of robotic systems. [Experiment Video]
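For context, the standard discrete DMP consists of a transformation system and a canonical system (this is the textbook formulation, not a restatement of the paper's derivation):

```latex
\tau \dot{z} = \alpha_z\bigl(\beta_z (g - y) - z\bigr) + f(x),
\qquad
\tau \dot{y} = z,
\qquad
\tau \dot{x} = -\alpha_x x,
```

with forcing term

```latex
f(x) = \frac{\sum_i \psi_i(x)\, w_i}{\sum_i \psi_i(x)}\; x\,(g - y_0),
```

where $y$ is the system state, $g$ the goal, $\psi_i$ Gaussian basis functions, and $w_i$ the shape weights. The weights $w_i$ are the shape parameters referred to above: they determine the trajectory's profile, so tuning them via black-box optimization against a wTLTL-derived cost steers the primitive toward satisfying the logical task.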

Deep Reinforcement Learning Based Effective Coverage Control (2022)

Dynamic coverage control is a type of cooperative control in which a multi-agent system must dynamically monitor an area of interest over time. To develop motion control laws, most previous works rely heavily on knowledge of system models, such as the environment model and agent kinematics/dynamics. However, acquiring an accurate model can be restrictive and even impossible in many practical applications. Another challenge is that, in practice, each agent often has limited communication capability: two agents may only exchange information when they are within a certain distance of each other. To address these challenges, a multi-agent deep reinforcement learning (MADRL) based control framework is developed. [Paper] [Experiment Video]
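As a minimal sketch of the dynamic coverage idea itself (this is an illustrative toy model, not the paper's MADRL controller; the function names, gain, and threshold are our own assumptions), each point of interest accumulates a "coverage level" whenever some agent is within sensing range, and the task is complete once every point reaches a desired level:

```python
import math

def coverage_step(points, agents, levels, radius=1.0, gain=1.0, dt=0.1):
    """One integration step of a simple coverage dynamic: each point's
    level grows in proportion to how many agents currently sense it."""
    new_levels = []
    for (px, py), c in zip(points, levels):
        rate = 0.0
        for (ax, ay) in agents:
            if math.hypot(px - ax, py - ay) <= radius:
                rate += gain  # each in-range agent contributes sensing effort
        new_levels.append(c + rate * dt)
    return new_levels

def fully_covered(levels, c_star=1.0):
    """The coverage task is done once every point reaches level c_star."""
    return all(c >= c_star for c in levels)
```

A model-free RL agent would learn motion commands that drive this accumulated coverage toward the threshold without ever being given the sensing or environment model in closed form.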

Modular Deep Reinforcement Learning for Continuous Motion Planning With Temporal Logic (2021)

We investigate the motion planning of autonomous dynamical systems modeled by Markov decision processes (MDP) with unknown transition probabilities over continuous state and action spaces. Linear temporal logic (LTL) is used to specify high-level tasks over an infinite horizon. The novelty lies in designing an embedded product MDP (EP-MDP) together with reward shaping and discounting schemes for model-free reinforcement learning (RL) that depend only on the EP-MDP states. A modular deep deterministic policy gradient (DDPG) algorithm is then developed to generate such policies over continuous state and action spaces. [Paper] [Code & Videos]

Motion Planning With Partially Infeasible LTL Constraints (2021)

We consider online optimal motion planning of an autonomous agent subject to linear temporal logic (LTL) constraints. Since user-specified tasks may not be fully realizable (i.e., may be partially infeasible) in a complex environment, we consider hard and soft LTL constraints, where hard constraints enforce safety requirements (e.g., obstacle avoidance) while soft constraints represent tasks that can be relaxed so as not to strictly follow user specifications. The agent's motion planning generates trajectories that, in decreasing order of priority, 1) guarantee the satisfaction of safety constraints; 2) mostly fulfill soft constraints (i.e., minimize the violation cost if desired tasks are partially infeasible); and 3) locally optimize reward collection over a finite horizon. [Paper] [Experiment Video]

Linear Temporal Logic (LTL) and Learning Based Hybrid Control of Robotic Systems

We are interested in the control of hybrid systems with continuous dynamics, described by systems of differential equations, and discrete dynamics, modeled as automata or state transition graphs. Examples of hybrid systems include mobile robots, quadrotors, driverless vehicles, etc. Our approach to the analysis and control of such systems combines concepts and tools from control theory, learning algorithms, and formal methods in computer science.

Control of Networked Autonomous Systems under Network Constraints
Supported by the Air Force Office of Scientific Research (AFOSR)

By unifying techniques from control theory, graph theory, communication, and estimation technology, the research aims to systematically design networked control strategies for autonomous assets to perform cooperative tasks (e.g., navigation, surveillance, etc.) in a complex environment with various constraints, such as network connectivity constraints, sensor constraints, and communication bandwidth constraints.

Dynamic Emotional Behavior and Automation Reliability in the Human-Machine Social Network
Supported by the Air Force Office of Scientific Research (AFOSR)

The overarching objective is to develop an assistive technology that optimizes the performance of human-robot interaction (HRI) in the presence of uncertain automation reliability and human factors, such as cognitive workload and trust in the automation.

A Privileged Sensing Framework: Revolutionizing Human-Autonomy Integration
Supported by the Autonomy Research Pilot Initiative, Department of Defense

Based on consequence-based privilege and confidence-based human sensing, the privileged sensing framework aims to incorporate insights into operator state, capabilities, and intention to optimally fuse inputs from both human and autonomy to provide better decisions in human-autonomy systems.

Connectivity Maintenance of Networked Systems

Multi-agent systems have recently emerged as an inexpensive and robust way of addressing a wide variety of tasks, ranging from exploration, surveillance, and reconnaissance to cooperative construction and manipulation. The success of these approaches relies on efficient information exchange and coordination between the members of the team. The main interest of this project is to develop decentralized controllers for a group of autonomous agents to perform coordinated global tasks using local information.
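A common abstraction in this setting (sketched here under a simple disk-communication assumption; the function names are our own) is that two agents can exchange information only within some range R, and the controller must keep the induced communication graph connected:

```python
import math
from collections import deque

def comm_graph(positions, R):
    """Adjacency list of the R-disk communication graph: agents i and j
    are neighbors iff they are within distance R of each other."""
    n = len(positions)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) <= R:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def is_connected(positions, R):
    """BFS check that every agent can reach every other agent through
    the communication graph -- the property a connectivity-maintenance
    controller must preserve as the team moves."""
    if not positions:
        return True
    adj = comm_graph(positions, R)
    seen = {0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(positions)
```

A decentralized controller cannot run this global check directly; instead each agent acts on local neighbor information so that connectivity is preserved as an invariant of the team's motion.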

Vision-based Estimation and Control

Structure and Motion (SaM) estimation using a camera is a well-known problem in the robotics and computer vision research communities. SaM estimation is important for robotic applications such as vision-based urban navigation of an autonomous agent, manipulation of unknown and moving targets, or human-machine interaction. The objective of SaM estimation is to estimate the Euclidean geometry of the feature points as well as the relative motion between the camera and the feature points.
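The geometric core of the problem is the standard pinhole projection: a feature point with camera-frame Euclidean coordinates $\bar{m} = [x,\, y,\, z]^{\top}$ is measured only through its normalized image coordinates

```latex
m = \frac{1}{z}\begin{bmatrix} x \\ y \end{bmatrix}.
```

The depth $z$ is lost in this projection, so SaM estimation must recover the unmeasurable structure (depth) and the relative camera-to-target motion from the sequence of image measurements $m$.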

Lyapunov-based Nonlinear Control

The research is focused on the development and application of a Lyapunov-based control methodology, which incorporates the full nonlinear system dynamics in the design and analysis without requiring the solution of the nonlinear equations of motion. Research efforts are specifically focused on adaptive, robust, and learning control designs for nonlinear systems to address issues related to uncertain nonlinear dynamics with limited or uncalibrated/corrupt sensor information.
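A canonical sketch of the design pattern (stated generically; the specific gains and error definitions vary by problem): for a tracking error $e = x - x_d$, choose the candidate Lyapunov function

```latex
V(e) = \tfrac{1}{2}\, e^{\top} e,
```

and design the control input so that along closed-loop trajectories

```latex
\dot{V} = e^{\top}\dot{e} \;\le\; -k\, e^{\top} e, \qquad k > 0,
```

which guarantees exponential convergence of the error. The design works directly with the full nonlinear dynamics through $\dot{e}$, without ever solving the nonlinear equations of motion in closed form; adaptive and robust terms are added to $\dot{V}$'s analysis to handle uncertain dynamics and corrupted sensing.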