Real-time risk monitoring: From batch processes to live analytics
Synopsis
The term 'real-time analytics' has seen a marked increase in use in both the academic and popular press. The availability of data from many disparate sources now spawns hundreds of articles that predict future trends or needs, or that analyze the past or the present in an attempt to beat competitors and maximize profits. Our emphasis lies on the present: we are passionate about devising models and systems that can help recognize problems in real time. As healthcare professionals, we are especially interested in using large datasets to create 'early warning' systems that allow continuous monitoring of very high-risk patients and help avoid preventable deteriorations. Such systems could then enable interventions aimed at improving the patient's condition, avoiding extended hospitalization and containing hospital costs.
The key to real-time monitoring is perceiving what is happening every minute, and knowing how important that minute is. That is not a trivial task, especially for processes subject to a large number of decisions external to the control system. Even when the control system itself is the focus of analysis, most of the statistical process control and quality literature is concerned with finding models that describe the historical pattern of variability. The availability of advanced modeling methods, together with growing computing power, has given rise to a new class of models that can handle far larger numbers of predictors and permit automatic fitting in a wide variety of settings. These models look for linear and nonlinear relationships between input and target data over very short or fractional time intervals. By their very nature, they allow real-time estimates of whether an automated system is 'lost in the woods' or whether an activity is heading in the right direction. A less obvious use of conventional models is the real-time monitoring of business data through model-based, quality-related time series. With a forecast-driven approach, we show how such models can be selective in the search for the starting points of process malfunctions. These time series models make no attempt to capture full information at a detailed level; instead, they are constructed to emphasize the largest disturbances, generating substantial 'error-catching' potential while at the same time providing a measure that gives enough warning of shifting dynamics.
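The forecast-driven idea described above can be sketched in a few lines: forecast each observation one step ahead, and raise an alarm when the forecast error is far outside the residuals seen so far. This is a minimal illustration only; the exponential-smoothing forecaster, the smoothing constant, the threshold, and the example data are hypothetical assumptions, not the authors' actual models.

```python
def monitor(series, alpha=0.3, k=3.0, warmup=10):
    """Flag indices where a process malfunction may begin.

    A one-step-ahead exponential-smoothing forecast is compared with each
    new observation; when the residual deviates from the mean of past
    residuals by more than k standard deviations, the index is reported.
    """
    forecast = series[0]
    residuals = []
    alarms = []
    for t, obs in enumerate(series[1:], start=1):
        resid = obs - forecast
        if len(residuals) >= warmup:
            mean = sum(residuals) / len(residuals)
            var = sum((r - mean) ** 2 for r in residuals) / len(residuals)
            sigma = var ** 0.5
            if sigma > 0 and abs(resid - mean) > k * sigma:
                alarms.append(t)  # possible malfunction starting point
        residuals.append(resid)
        # update the one-step-ahead forecast
        forecast = alpha * obs + (1 - alpha) * forecast
    return alarms

# Stable (mildly cyclical) process followed by a sudden level shift at index 30.
data = [10.0 + 0.1 * (i % 3) for i in range(30)] + [14.0] * 10
print(monitor(data))  # first alarm at index 30, where the shift begins
```

Because the detector compares each error only against the variability of earlier errors, it stays quiet during routine fluctuation and reacts exactly when the dynamics shift, which is the selectivity the forecast-driven approach aims for.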