
Unifying Evaluation of Machine Learning Safety Monitors

Abstract: With the increasing use of Machine Learning (ML) in critical autonomous systems, runtime monitors have been developed to detect prediction errors and keep the system in a safe state during operation. Monitors have been proposed for different applications involving diverse perception tasks and ML models, and specific evaluation procedures and metrics are used in different contexts. This paper introduces three unified safety-oriented metrics, representing the safety benefits of the monitor (Safety Gain), the safety gaps that remain after using it (Residual Hazard), and its negative impact on the system's performance (Availability Cost). Computing these metrics requires defining two return functions, representing how a given ML prediction will impact expected future rewards and hazards. Three use cases (classification, drone landing, and autonomous driving) are used to demonstrate how metrics from the literature can be expressed in terms of the proposed metrics. Experimental results on these examples show how different evaluation choices impact the perceived performance of a monitor. Because our formalism requires explicit safety assumptions to be formulated, it allows us to ensure that the evaluation conducted matches the high-level system requirements.
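The abstract's structure can be illustrated with a toy sketch. The definitions below are assumptions for illustration, not the paper's actual formulas: each prediction is assigned a hazard return and a reward return, and a flagged prediction triggers a safe fallback that removes its hazard but forfeits its reward. Safety Gain is then the hazard reduction achieved by the monitor, Residual Hazard the hazard that remains despite it, and Availability Cost the reward lost to its interventions.

```python
# Hypothetical sketch of the three metrics (assumed definitions, not the
# paper's): a monitor that flags a prediction replaces its outcome with a
# safe fallback, zeroing its hazard return but also its reward return.

def evaluate_monitor(predictions):
    """predictions: list of (hazard_return, reward_return, flagged) tuples."""
    n = len(predictions)
    # Expected returns of the unmonitored system.
    hazard_without = sum(h for h, r, f in predictions) / n
    reward_without = sum(r for h, r, f in predictions) / n
    # With the monitor, flagged predictions contribute neither hazard nor reward.
    hazard_with = sum(h for h, r, f in predictions if not f) / n
    reward_with = sum(r for h, r, f in predictions if not f) / n
    safety_gain = hazard_without - hazard_with      # hazard removed by the monitor
    residual_hazard = hazard_with                   # hazard the monitor missed
    availability_cost = reward_without - reward_with  # reward lost to interventions
    return safety_gain, residual_hazard, availability_cost

# A monitor catching one of two hazardous predictions, at the cost of
# also flagging one safe, rewarding prediction:
sg, rh, ac = evaluate_monitor([
    (1.0, 0.0, True),    # hazardous, caught
    (1.0, 0.5, False),   # hazardous, missed
    (0.0, 1.0, True),    # safe, wrongly flagged
    (0.0, 1.0, False),   # safe, accepted
])
```

A perfect monitor would drive Residual Hazard and Availability Cost to zero while maximizing Safety Gain; the example above trades some availability for safety, which is the tension the paper's unified metrics are meant to expose.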
Document type: Conference papers
Contributor: Joris Guérin
Submitted on: Wednesday, August 31, 2022 - 9:26:49 AM
Last modification on: Thursday, September 15, 2022 - 4:12:48 AM




HAL Id: hal-03765273, version 1


Joris Guérin, Raul Sena Ferreira, Kevin Delmas, Jérémie Guiochet. Unifying Evaluation of Machine Learning Safety Monitors. 33rd IEEE International Symposium on Software Reliability Engineering (ISSRE 2022), Oct 2022, Charlotte, United States. ⟨hal-03765273⟩


