Research

Here are some themes and techniques that we are currently working on:

Formal methods for robotics. In recent years, formal verification techniques such as model checking have been used extensively for the analysis and control of robotic mission specification and planning, and model checking is now an indispensable part of designing safety-critical systems. However, verifying important properties of a system with model checking does not guarantee that the system will behave as expected during runtime operation; that question falls under the scope of runtime verification, a lightweight verification technique that checks whether a run of a system satisfies or violates a given correctness property. In this project, we are investigating a novel formal specification language for robotic mission requirements, together with algorithmic techniques for monitoring those requirements at runtime. See our IEEE RA-L paper for more details.
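
As a toy illustration of what a runtime monitor does (this is a minimal sketch, not the specification language or algorithms from the RA-L paper; the event names and bound are hypothetical), the code below checks a bounded-response property — "every request must be followed by a grant within max_delay steps" — over a finite run of events:

```python
# Toy runtime monitor for a bounded-response property:
# "every 'request' is followed by a 'grant' within max_delay steps".
# Illustrative sketch only; event names and the bound are hypothetical.

def monitor(trace, max_delay=3):
    """Return (verdict, step): False at the step where the property
    is violated, or True if the whole run satisfies it."""
    pending = []  # steps at which an unanswered 'request' occurred
    for step, event in enumerate(trace):
        if event == "request":
            pending.append(step)
        elif event == "grant" and pending:
            pending.pop(0)  # the earliest outstanding request is answered
        # a request left unanswered for max_delay steps violates the property
        if pending and step - pending[0] >= max_delay:
            return False, step
    return True, len(trace)

run = ["request", "idle", "grant", "request", "idle", "idle", "idle"]
print(monitor(run))  # -> (False, 6): the second request is never granted in time
```

Unlike model checking, the monitor only ever sees this one run, which is what makes the technique lightweight enough to execute alongside the deployed system.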

Fault-tolerant, secure, and energy-efficient ML/AI hardware. To date, many energy-aware solutions, such as approximate computing and neuromorphic computing, have been proposed to address the energy constraints of edge AI devices. However, these solutions remain vulnerable to reliability threats (e.g., permanent and transient faults) and security threats (e.g., adversarial attacks). Approximate computing-based DL algorithms trade a small amount of accuracy for energy efficiency in error-resilient applications, but because approximation is itself error-inducing, there is a pressing need to study the vulnerabilities of approximate DNNs (AxDNNs) to reliability and security threats. We have recently observed that the reliability, energy-efficiency, and robustness knobs are coupled: tuning any one of them affects the others. This project aims to develop techniques for designing energy-, reliability-, and robustness-aware AI hardware for safety-critical applications. See our DATE 2022, ISQED 2021, and IOLTS 2020 papers for more details.
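
To make the fault-injection side of this concrete, here is a minimal sketch (the tiny layer, the int8 quantization scheme, and the single-bit-flip fault model are illustrative assumptions, not the setups from the cited papers) that flips one bit of a quantized weight and measures how far the layer's output drifts:

```python
import numpy as np

# Minimal fault-injection sketch: flip one bit of an int8-quantized
# weight and observe the output deviation of a tiny dense layer.
# Illustrative only -- not the fault model from the cited papers.

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)   # weights of a tiny dense layer
x = rng.normal(size=(1, 4)).astype(np.float32)   # one input sample

scale = float(np.abs(w).max()) / 127.0           # symmetric int8 quantization
w_q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)

def inject_bit_flip(weights, row, col, bit):
    """Flip one bit of a single int8 weight (a simple transient-fault model)."""
    faulty = weights.copy()
    raw = faulty.view(np.uint8)          # reinterpret the int8 bits as uint8
    raw[row, col] ^= np.uint8(1 << bit)  # flip the chosen bit in place
    return faulty

clean_out = x @ (w_q.astype(np.float32) * scale)
w_f = inject_bit_flip(w_q, row=2, col=5, bit=7)  # bit 7 = sign/MSB
faulty_out = x @ (w_f.astype(np.float32) * scale)
print("max output deviation:", float(np.abs(clean_out - faulty_out).max()))
```

Sweeping over weight positions and bit indices in this fashion is one simple way to map which faults a network can absorb and which ones it cannot.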

Cybersecurity issues in virtual reality (VR) applications. Social Virtual Reality Learning Environment (VRLE) technologies offer a new medium for flexible learning environments with geo-distributed users. Social VRLEs run on networked systems that must be properly designed for performance and resilience in order to prevent and mitigate cyber attacks unique to VR (e.g., immersion attacks such as the occlusion attack and the Chaperone file attack) as well as fault scenarios (e.g., network faults). Ill-suited design leaves these systems vulnerable to security breaches and privacy-leakage attacks that significantly impact users: poor design, as well as security and privacy attacks, can disrupt the immersive user experience and may even induce ‘cybersickness’. Hence, ensuring security, privacy, and safety (SPS) is critical to enabling safe and effective student learning activities in social VRLEs. In this project, we aim to investigate a transformative co-design of the learning and resilience aspects of social VRLEs. See our IEEE TDSC 2021, FiCloud 2021, CCNC 2020, and CCNC 2019 papers for more details.
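
As a small illustration of the resilience-monitoring flavor of this co-design (the comfort threshold, metric, and sample data below are hypothetical placeholders, not values or mechanisms from the cited papers), one could flag frames whose latency drifts past a comfort budget, since sustained latency spikes are one driver of degraded immersion and cybersickness:

```python
# Toy resilience check: flag VR frames whose latency exceeds a
# comfort budget. The 20 ms threshold and sample data are
# hypothetical placeholders, not measured values.

COMFORT_MS = 20.0  # assumed per-frame comfort budget

def flag_risky_frames(latencies_ms, threshold=COMFORT_MS):
    """Return indices of frames likely to degrade the immersive experience."""
    return [i for i, ms in enumerate(latencies_ms) if ms > threshold]

frames = [11.2, 12.8, 14.1, 35.6, 13.0, 41.9]  # e.g., a network-fault burst
print(flag_risky_frames(frames))               # -> [3, 5]
```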

Cybersecurity issues in intelligent prognostics. Recent advances in deep learning (DL) techniques and Internet-of-Things (IoT) sensors have enabled the emergence of intelligent prognostics, also known as predictive maintenance (PdM). PdM anticipates asset failure before it occurs by analyzing sensor-obtained multivariate time series (MTS) data and identifying patterns with state-of-the-art DL algorithms, thus significantly reducing downtime and maintenance costs. Unfortunately, the network connectivity that PdM requires also creates new targets for adversarial attacks, since PdM systems inherit the vulnerabilities of both DL models and sensors. Our preliminary results show that adversarial attacks pose a significant threat to PdM systems: they can force wrong predictions with potentially catastrophic consequences for safety-critical applications. To date, the design of PdM systems has focused almost exclusively on model accuracy. Given the evolution of adversarial attacks, there is now a pressing need to evaluate the impact of such attacks on PdM systems and to investigate techniques for detecting them. This project aims to develop novel theories, efficient methods, and an overarching framework for designing and deploying secure PdM systems. See our ICMLA 2020, AIPR 2020, and NOMS 2020 papers for more details.
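
To make the threat concrete, here is a minimal FGSM-style sketch against a stand-in PdM regressor (the linear model, random "sensor window", and epsilon are placeholders; the cited papers attack real DL models on real MTS benchmarks): the adversary nudges every sensor reading in the direction that most inflates the predicted remaining useful life (RUL).

```python
import numpy as np

# FGSM-style attack on a stand-in PdM regressor. The linear model
# and random MTS window are placeholders for the DL models and
# benchmarks used in the cited papers.

rng = np.random.default_rng(1)
T, S = 30, 4                          # time steps x sensors
x = rng.normal(size=(T, S))           # one multivariate time-series window
w = rng.normal(size=(T, S))           # stand-in model weights

def predict_rul(window):
    """Toy remaining-useful-life model: a linear score over the window."""
    return float((window * w).sum())

eps = 0.05
grad = w                              # d(prediction)/d(x) for the linear model
x_adv = x + eps * np.sign(grad)       # FGSM step: push the predicted RUL upward

print("clean RUL   :", predict_rul(x))
print("attacked RUL:", predict_rul(x_adv))  # the asset looks healthier than it is
```

An inflated RUL estimate would delay maintenance past the point of failure, which is exactly the failure mode that makes these attacks dangerous in safety-critical deployments.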