Explainable AI

A criticism of artificial intelligence is that it is a black box: the process by which it transforms inputs into outputs is opaque. This opacity is not acceptable in healthcare, where accountability requires transparency and, sometimes, the process is as important as the output.

Thiagarajan et al. [1] published a preprint in which they provide greater insight into which features a neural network uses to classify dermatologic lesions. Their approach is to identify a latent space with statistically independent dimensions.
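The preprint's exact procedure is not reproduced here; as a minimal, hypothetical sketch of the general idea of a latent space with statistically independent dimensions, one could apply independent component analysis to features extracted from a trained classifier (the feature array below is a placeholder, not the authors' data or method):

# Minimal sketch (not the authors' method): use ICA to obtain a latent space
# with statistically independent dimensions from network features.
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder: features extracted from a trained classifier, one row per image
# (e.g. penultimate-layer activations).
features = np.random.randn(500, 128)

ica = FastICA(n_components=8, random_state=0)
latent = ica.fit_transform(features)  # shape (500, 8), statistically independent dims

# Each latent dimension can then be varied or correlated with the network's
# predictions to probe which factors drive the lesion classification.
print(latent.shape)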

Interval Calibration
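As a generic sketch of what interval calibration typically checks (not the preprint's specific procedure), a model's prediction intervals are well calibrated when their empirical coverage matches the nominal level; the arrays y_true, lower, and upper below are hypothetical:

# Generic sketch: check empirical coverage of predicted intervals.
import numpy as np

def empirical_coverage(y_true, lower, upper):
    """Fraction of observations falling inside their predicted interval."""
    inside = (y_true >= lower) & (y_true <= upper)
    return inside.mean()

# Hypothetical example with nominal 90% intervals.
y_true = np.array([1.2, 0.7, 2.1, 1.8, 0.3])
lower = np.array([0.9, 0.5, 1.5, 1.0, 0.0])
upper = np.array([1.5, 1.1, 2.0, 2.5, 0.6])

print(empirical_coverage(y_true, lower, upper))  # compare against 0.90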

Latent Spaces

Longer discussion here

  • How to do with language
  • Counterfactual reasoning
  • Similarity to Eve Marder’s work
  • Could also look at latent spaces across receptors

Bibliography

  1. Thiagarajan, J. J., Sattigeri, P., Rajan, D. & Venkatesh, B. Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models. (2020).