Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
Cynthia Rudin, Duke University
https://www.nature.com/articles/s42256-019-0048-x
ArXiv Version:
https://arxiv.org/pdf/1811.10154
#artificialintelligence #explainableai #blackbox
Nature Machine Intelligence - There has been a recent rise of interest in developing methods for ‘explainable AI’, where models are created to explain how a first ‘black...