Publication Date



Open access

Embargo Period


Degree Type


Degree Name

Doctor of Philosophy (PhD)


Industrial Engineering (Engineering)

Date of Defense


First Committee Member

Ramin Moghaddass

Second Committee Member

Murat Erkoc

Third Committee Member

Mei-Ling Shyu

Fourth Committee Member

Vincent Omachonu


Real-time system monitoring and control are two of the most important challenges facing modern industries in critical areas of civilian and military interest, including the power grid, energy, healthcare, aerospace, and infrastructure. During the past decade, robust methods for dynamic system monitoring and control, aimed at fault diagnosis and failure prognosis, have developed rapidly. Among the various monitoring and control policies, condition-based maintenance (CBM) has been studied by many researchers because it can exploit large volumes of monitoring data for real-time diagnostics and prognostics. A considerable body of literature has been published on the subject, providing a wide range of dynamic system control methods. Previously published studies, however, are limited by assumptions that fall into three main categories: i) predefined system failure thresholds, ii) simplified latent dynamics, and iii) unrealistic parametric forms describing the evolution of the system dynamics through time. This thesis provides an array of solution approaches that relax these assumptions by introducing novel quantitative frameworks for real-time monitoring, control, and decision-making for dynamic systems. The proposed frameworks form the two main phases of a comprehensive framework. The first phase contains two original Bayesian filtering methods for condition monitoring and control of systems with either linear or non-linear degradation dynamics. The former is designed for systems with linear latent and observable dynamics and uses Kalman filtering for state-parameter inference. It models failure as a purely stochastic process, based on logistic regression, that is driven directly by the latent system dynamics, thereby avoiding the need for a priori failure thresholds.
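The linear case can be sketched in a few lines: a standard Kalman predict/update step for the latent state, followed by a logistic failure probability driven by that state, so no fixed failure threshold is needed. This is a minimal illustration under generic assumptions, not the thesis's implementation; the matrices and the logistic coefficients (`beta0`, `beta`) are hypothetical placeholders.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : prior state mean and covariance
    z    : new sensor observation
    F, H : state-transition and observation matrices
    Q, R : process and observation noise covariances
    """
    # Predict the latent state forward one step.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct the prediction with the new observation.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

def failure_probability(x, beta0, beta):
    """Logistic failure process driven by the filtered latent state:
    returns P(failure | x) rather than comparing x to a threshold."""
    return 1.0 / (1.0 + np.exp(-(beta0 + beta @ x)))
```

In this sketch the failure probability is recomputed after every filter update, so the degradation estimate and the failure risk evolve together in real time.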
The latter accommodates multiple levels of system dynamics that evolve either linearly or non-linearly. A hybrid particle filter is developed for state-parameter inference, while an Extreme Learning Machine artificial neural network relates sensor observations to the latent system dynamics. Both frameworks are tested and validated on synthetic and real-world time-series datasets. The second phase of this thesis introduces an original method for optimal control and decision-making that employs Bayesian filtering-based deep reinforcement learning in fully stochastic environments. Sets of deep reinforcement learning agents are trained to develop control policies, with the Bayesian filtering methods from the first phase supplying the environment states from the estimates of the latent system dynamics. The method is applied to two problems: maintenance cost minimization and estimation of the remaining useful life of a system under condition monitoring. Results obtained by applying the framework to simulated and real-world time-series data suggest that the proposed Bayesian filtering-based deep reinforcement learning algorithm can be trained even with limited data, which is useful for real-time control and decision-making in many dynamic systems.
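The coupling between filtering and reinforcement learning in the second phase can be illustrated with a toy example: a tabular Q-learning agent whose observed state is a noisy, discretized estimate of the true degradation level (standing in for the Bayesian filter output of the first phase), choosing between running the system and performing maintenance. The environment, costs, and hyperparameters below are invented for illustration and do not come from the thesis, which uses deep agents rather than a table.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy degradation environment: true wear grows stochastically; the agent
# only sees a noisy estimate of it. Actions: 0 = run, 1 = maintain.
N_STATES, N_ACTIONS = 10, 2

def step(wear, action):
    if action == 1:                       # maintenance resets wear at a cost
        return 0.0, -5.0
    wear = wear + rng.uniform(0.0, 1.5)   # stochastic degradation
    if wear >= N_STATES - 1:              # failure: large penalty, then reset
        return 0.0, -50.0
    return wear, -1.0                     # running cost

def filtered_state(wear):
    # Stand-in for a Bayesian filter estimate: noisy, then discretized.
    est = wear + rng.normal(0.0, 0.3)
    return int(np.clip(np.round(est), 0, N_STATES - 1))

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1
wear = 0.0
s = filtered_state(wear)
for _ in range(20000):
    # Epsilon-greedy action selection on the filtered state.
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
    wear, r = step(wear, a)
    s2 = filtered_state(wear)
    # Standard Q-learning update.
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2
```

After training, the greedy policy keeps the system running at low wear and triggers maintenance as the estimated degradation approaches failure, which is the qualitative behavior a cost-minimizing CBM policy should exhibit.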


Bayesian filtering; reinforcement learning; dynamic control; system maintenance