Out-of-Distribution Detection in Multi-Label Datasets using Latent Space of $\beta$-VAE
Paper
03/16/2020
Learning Enabled Components (LECs) are widely used in a variety of perception-based autonomy tasks such as image segmentation, object detection, and end-to-end driving. These components are trained on large image datasets whose images vary across multimodal factors such as weather conditions, time of day, and traffic density. The LECs learn these factors during training, and at test time, variation in any of these factors can confuse the components and result in low-confidence predictions. Images with factor values not seen during training are commonly referred to as Out-of-Distribution (OOD). For safe autonomy, it is important to identify OOD images so that a suitable mitigation strategy can be applied. Classical one-class classifiers like SVM and SVDD are commonly used for OOD detection; however, the multiple labels attached to images in these datasets restrict the direct application of these techniques. We address this problem using the latent space of the $\beta$-Variational Autoencoder ($\beta$-VAE). We exploit the fact that the compact latent space generated by an appropriately selected $\beta$-VAE encodes the information about these factors in a few latent variables, which can be used for quick and computationally inexpensive detection. We evaluate our approach on the dataset, and our results show that the latent space of the $\beta$-VAE is sensitive to changes in the values of the generative factors.
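To make the idea of using a few latent variables for detection concrete, the sketch below shows one minimal way such a detector could be wired up. It is an illustrative assumption, not the paper's reported procedure: the toy encoder `BetaVAEEncoder`, the KL-based choice of "sensitive" dimensions in `select_sensitive_dims`, the score in `ood_score`, and the 95th-percentile threshold are all hypothetical stand-ins chosen for brevity.

```python
# Illustrative sketch (not the paper's exact method): score images as OOD by
# measuring how far the beta-VAE posterior for a few "sensitive" latent
# dimensions drifts from what was seen on in-distribution calibration data.
import torch
import torch.nn as nn

class BetaVAEEncoder(nn.Module):
    """Toy convolutional encoder producing a diagonal Gaussian posterior."""
    def __init__(self, latent_dim=30):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        return self.fc_mu(h), self.fc_logvar(h)

def per_dim_kl(mu, logvar):
    """KL( q(z|x) || N(0, I) ) split per latent dimension, shape (batch, latent_dim)."""
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0)

def select_sensitive_dims(encoder, calib_images, k=5):
    """Pick the k latent dimensions with the largest average KL on calibration data;
    in a disentangled beta-VAE these tend to carry the generative factors."""
    with torch.no_grad():
        mu, logvar = encoder(calib_images)
        avg_kl = per_dim_kl(mu, logvar).mean(dim=0)
    return torch.topk(avg_kl, k).indices

def ood_score(encoder, images, dims):
    """Sum of per-dimension KL over the selected sensitive dimensions."""
    with torch.no_grad():
        mu, logvar = encoder(images)
        return per_dim_kl(mu, logvar)[:, dims].sum(dim=1)

# Usage with random stand-in data; in practice a trained encoder and real
# in-distribution frames would be used for calibration.
encoder = BetaVAEEncoder().eval()
calib = torch.randn(64, 3, 64, 64)           # calibration frames (in-distribution)
dims = select_sensitive_dims(encoder, calib, k=5)
threshold = ood_score(encoder, calib, dims).quantile(0.95)
test = torch.randn(8, 3, 64, 64)             # incoming frames at test time
is_ood = ood_score(encoder, test, dims) > threshold
```

Restricting the score to a handful of latent dimensions is what keeps the check computationally inexpensive: only a single encoder pass and a small per-dimension KL computation are needed per frame.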