Paper accepted for publication in the proceedings of NeurIPS 2022
We present an approach to quantify both aleatoric and epistemic uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs). While most work in the literature that uses GANs to generate out-of-distribution (OoD) examples focuses only on evaluating OoD detection, we present a GAN-based approach to learning a classifier that produces appropriate uncertainties for both OoD examples and false positives (FPs). Instead of shielding the entire in-distribution data with GAN-generated OoD examples, as in the current state of the art, we shield each class separately with out-of-class examples generated by a conditional GAN, and complement this with a one-vs-all image classifier. In experiments on CIFAR10, CIFAR100, and Tiny ImageNet, we improve the OoD detection and FP detection performance of state-of-the-art GAN-training-based classifiers. Furthermore, we find that the GAN-generated examples do not significantly affect the calibration error of our classifier and even lead to a significant gain in model accuracy.
The preprint is available at: arxiv.org/abs/2201.13279
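To make the per-class shielding idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: the class names, the tiny backbone, and the stub conditional generator are all hypothetical placeholders standing in for a pre-trained conditional GAN. It illustrates a one-vs-all head whose class-c output is trained as a binary classifier on real data, while generated out-of-class examples for class c act as additional negatives that "shield" that class.

```python
# Minimal sketch (assumed names, not the paper's code) of one-vs-all
# training with per-class shielding by a conditional GAN.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, LATENT_DIM = 10, 128

class OneVsAllHead(nn.Module):
    """Small backbone with C independent binary (one-vs-all) outputs."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.logits = nn.Linear(64, num_classes)  # one logit per one-vs-all classifier

    def forward(self, x):
        return self.logits(self.backbone(x))

class StubConditionalGenerator(nn.Module):
    """Placeholder for a pre-trained conditional GAN generator G(z, y)
    that produces out-of-class examples for the conditioning class y."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.embed = nn.Embedding(num_classes, LATENT_DIM)
        self.net = nn.Linear(LATENT_DIM, 3 * 32 * 32)

    def forward(self, z, y):
        return torch.tanh(self.net(z + self.embed(y))).view(-1, 3, 32, 32)

def one_vs_all_shielded_loss(model, generator, x_real, y_real):
    """BCE over all C one-vs-all outputs on real data, plus a shielding
    term pushing head c toward 0 on examples generated *for* class c."""
    logits = model(x_real)                                    # (B, C)
    targets = F.one_hot(y_real, NUM_CLASSES).float()          # head c positive only for class c
    loss_real = F.binary_cross_entropy_with_logits(logits, targets)

    with torch.no_grad():                                     # generator stays fixed
        z = torch.randn(x_real.size(0), LATENT_DIM)
        x_gen = generator(z, y_real)                          # out-of-class samples for class c
    gen_logits = model(x_gen).gather(1, y_real.unsqueeze(1))  # head c evaluated on its own fakes
    loss_shield = F.binary_cross_entropy_with_logits(
        gen_logits, torch.zeros_like(gen_logits))             # treat generated samples as negatives
    return loss_real + loss_shield

# Usage on a dummy batch:
model, gen = OneVsAllHead(), StubConditionalGenerator()
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, NUM_CLASSES, (8,))
loss = one_vs_all_shielded_loss(model, gen, x, y)
loss.backward()
```

In contrast to shielding all in-distribution data at once with a single OoD generator, each one-vs-all output here gets its own class-specific negatives, which is the distinction the abstract draws with the prior state of the art.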