Toward Safe Decision-Making via Uncertainty Quantification in Machine Learning
Contributing USMA Research Unit(s): Army Cyber Institute
The automation of safety-critical systems is becoming increasingly prevalent as machine learning approaches become more sophisticated and capable. However, approaches that are safe to use in critical systems must account for uncertainty, whereas most real-world applications currently rely on deterministic machine learning techniques that cannot incorporate it. Before placing such systems in critical infrastructure, we must be able to understand and interpret how machines make decisions, both so that they can support human decision-making and so that they have the potential to operate autonomously. We therefore highlight the importance of incorporating uncertainty into the decision-making process and present the advantages of Bayesian decision theory. We showcase an example of classifying vehicles from their acoustic recordings, where certain classes carry significantly higher threat levels. We show how carefully adopting the Bayesian paradigm not only leads to safer decisions, but also provides a clear separation between the roles of the machine learning expert and the domain expert.
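To illustrate the decision-making setup the abstract describes, the following is a minimal sketch of Bayesian decision theory with an asymmetric loss matrix. The class names, posterior probabilities, and loss values are all hypothetical, chosen only to show how the machine learning model (which supplies the posterior) and the domain expert (who supplies the losses) play separate roles; they are not taken from the paper.

```python
import numpy as np

# Hypothetical vehicle classes; the third is treated as high-threat.
classes = ["civilian_car", "truck", "tracked_vehicle"]

# Illustrative posterior p(class | acoustic recording) from a
# probabilistic classifier (the machine learning expert's output).
posterior = np.array([0.45, 0.35, 0.20])

# Loss matrix L[i, j]: cost of deciding class j when the true class
# is i (the domain expert's input). Missing the high-threat class
# is assigned a much larger cost than any other error.
loss = np.array([
    [0.0,  1.0, 1.0],
    [1.0,  0.0, 1.0],
    [10.0, 10.0, 0.0],
])

# Bayes decision rule: pick the action minimizing posterior
# expected loss, rather than the most probable class.
expected_loss = posterior @ loss   # one entry per candidate decision
decision = classes[int(np.argmin(expected_loss))]

# The most probable class here is "civilian_car", but the asymmetric
# losses make "tracked_vehicle" the safer (lower expected loss) decision.
```

Note how the same posterior yields different decisions under different loss matrices: the safety judgment lives entirely in the domain expert's loss specification, not in the model.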
Keywords: decision making, uncertainty, machine learning, artificial intelligence, AI, safety, Bayesian decision theory, acoustic classification, uncertainty quantification
Cobb A.D., Jalaian B., Bastian N.D., Russell S. (2021) Toward Safe Decision-Making via Uncertainty Quantification in Machine Learning. In: Lawless W.F., Mittu R., Sofge D.A., Shortell T., McDermott T.A. (eds) Systems Engineering and Artificial Intelligence. Springer, Cham. https://doi.org/10.1007/978-3-030-77283-3_19