Research

Here are some themes and techniques that we are currently working on:

Formal methods for robotics. In the past few years, formal verification techniques such as model checking have been used extensively to analyze and control robotic mission specifications and planning. This project explores security- and privacy-aware motion planning using SMT-based model checking. Model checking is an essential part of designing safety-critical systems; however, formally verifying important properties of a system offline does not guarantee that the system will behave as expected during runtime operation, which falls under the scope of runtime verification. Runtime verification is a lightweight verification technique that checks whether a system's run satisfies or violates a given correctness property. In this project, we also investigate a novel formal specification language for robotic mission requirements, together with algorithms for monitoring those requirements at runtime. See our IEEE RA-L paper for more details.
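
To illustrate the runtime-verification idea in isolation, here is a minimal Python sketch of a monitor for a bounded-response property ("every request must be answered within a fixed number of steps"). The BoundedResponseMonitor class, the "request"/"grant" event names, and the bound are hypothetical stand-ins for illustration; this is not the mission-requirement specification language or monitoring algorithm from the RA-L paper.

```python
# Minimal sketch of a runtime monitor for a bounded-response property:
# "every 'request' must be followed by a 'grant' within `bound` steps."
# Purely illustrative; event names and bound are assumed for the example.

class BoundedResponseMonitor:
    def __init__(self, bound):
        self.bound = bound      # maximum number of steps a request may wait
        self.pending = []       # step indices of unanswered requests
        self.step = 0

    def observe(self, event):
        """Consume one event of the run and return a verdict so far."""
        if event == "request":
            self.pending.append(self.step)
        elif event == "grant":
            self.pending.clear()            # all outstanding requests answered
        self.step += 1
        # violated if any request has waited longer than `bound` steps
        if any(self.step - t > self.bound for t in self.pending):
            return "violated"
        return "ok so far"


if __name__ == "__main__":
    monitor = BoundedResponseMonitor(bound=3)
    trace = ["idle", "request", "idle", "grant",
             "request", "idle", "idle", "idle", "idle"]
    for e in trace:
        print(e, "->", monitor.observe(e))
```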

Fault-tolerant, secure, and energy-efficient ML/AI hardware. To date, many energy-aware solutions, such as approximate computing and neuromorphic computing, have been proposed to address the energy constraints of edge AI devices. However, these solutions are vulnerable to reliability threats (e.g., permanent and transient faults) and security threats (e.g., adversarial attacks). Approximate computing-based DL algorithms relax the requirement of near-perfect accuracy in exchange for energy efficiency in error-resilient applications. Because approximate computing is error-inducing by nature, there is a pressing need to study the vulnerabilities of approximate DNNs (AxDNNs) to reliability and security threats. Recently, we observed that tuning any one of the reliability, energy-efficiency, or robustness knobs can affect the others. This project aims to develop techniques for designing energy-, reliability-, and robustness-aware AI hardware for safety-critical applications. See our DATE 2022, ISQED 2021, and IOLTS 2020 papers for more details.
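
To illustrate the reliability side of this problem, the sketch below injects a single-bit fault into the quantized weights of a toy layer and measures how far its output drifts. The layer shape, the flip_bit helper, and the random fault location are assumptions made for illustration only; this is not the experimental setup used in our DATE, ISQED, or IOLTS papers.

```python
# Minimal sketch of single-bit fault injection into int8 (quantized) weights,
# a common way to probe the reliability of a DNN layer. Toy data, not our setup.
import numpy as np

def flip_bit(value, bit):
    """Flip one bit of an 8-bit two's-complement weight (simulated fault)."""
    u = int(value) & 0xFF               # view the weight as an unsigned byte
    u ^= (1 << bit)                     # flip the chosen bit
    return u - 256 if u >= 128 else u   # map back to the signed int8 range

rng = np.random.default_rng(0)
weights = rng.integers(-128, 128, size=(8, 8), dtype=np.int8)  # quantized layer
x = rng.integers(-128, 128, size=(8,), dtype=np.int8)          # one input vector

clean_out = weights.astype(np.int32) @ x.astype(np.int32)

# inject one fault at a random weight position and bit
i, j = rng.integers(0, 8, size=2)
bit = int(rng.integers(0, 8))
faulty = weights.copy()
faulty[i, j] = flip_bit(faulty[i, j], bit)
faulty_out = faulty.astype(np.int32) @ x.astype(np.int32)

print("max output deviation caused by one bit flip:",
      int(np.abs(clean_out - faulty_out).max()))
```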

Cybersecurity issues in virtual reality (VR) applications. Social Virtual Reality Learning Environment (VRLE) technologies offer a new medium for flexible learning environments with geo-distributed users. Social VRLEs are deployed on networked systems that must be designed correctly for performance and resilience in order to prevent and mitigate unique cyber attacks (e.g., immersion attacks such as occlusion attacks and Chaperone file attacks) and fault scenarios (e.g., network faults). An ill-suited design leaves them vulnerable to security breaches and privacy-leakage attacks that significantly impact users. Specifically, poor design choices and security or privacy attacks can disrupt the user's immersive experience and may lead to 'cybersickness'. Hence, ensuring security, privacy, and safety (SPS) is critical to enabling safe and effective student learning activities in social VRLEs. In this project, we aim to investigate a transformative co-design of the learning and resilience aspects of social VRLEs. See our IEEE TDSC 2021, FiCloud 2021, CCNC 2020, and CCNC 2019 papers for more details.

Cybersecurity issues in intelligent prognostics. Recent advances in deep learning (DL) techniques and Internet-of-Things (IoT) sensors have enabled the emergence of intelligent prognostics, also known as predictive maintenance (PdM). PdM prevents asset failure before it occurs by analyzing sensor-obtained multivariate time series (MTS) data and identifying patterns with state-of-the-art DL algorithms, thus significantly reducing downtime and maintenance costs. Unfortunately, the network connectivity required for PdM also creates new targets for adversarial attacks, since PdM systems inherit the vulnerabilities of the DL models and sensors they rely on. Our preliminary results show that adversarial attacks pose a significant threat to PdM systems: they induce wrong predictions that can have catastrophic consequences in safety-critical applications. To date, the design of PdM systems has focused almost exclusively on the accuracy of the PdM models. Given the evolution of adversarial attacks, there is now a pressing need to evaluate the impact of such attacks on PdM systems and to investigate techniques for detecting them. This project aims to develop novel theories, efficient methods, and an overarching framework for designing and deploying secure PdM systems. See our ICMLA 2020, AIPR 2020, and NOMS 2020 papers for more details.
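
As a rough illustration of the threat model (not our actual attack or defense pipeline), the sketch below applies an FGSM-style perturbation to a toy linear remaining-useful-life (RUL) predictor over one MTS window. The model, the predict function, the window dimensions, and epsilon are hypothetical; for a linear model the input gradient is simply the weight matrix, which keeps the example self-contained.

```python
# Minimal sketch of an FGSM-style adversarial perturbation on a multivariate
# time-series (MTS) window fed to a toy linear RUL regressor. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
T, F = 50, 4                       # 50 time steps, 4 sensor channels (assumed)
x = rng.normal(size=(T, F))        # one MTS window of sensor readings
w = rng.normal(size=(T, F))        # weights of a toy linear RUL model

def predict(window):
    """Toy RUL estimate: weighted sum of the sensor window."""
    return float(np.sum(w * window))

# FGSM step: for a linear model, the gradient of the prediction w.r.t. the
# input is just `w`, so one sign step pushes the RUL estimate upward
# (making the asset look healthier than it is).
epsilon = 0.05
x_adv = x + epsilon * np.sign(w)

print("clean RUL estimate:    ", round(predict(x), 2))
print("attacked RUL estimate: ", round(predict(x_adv), 2))
print("max per-reading change:", float(np.abs(x_adv - x).max()))
```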