Cross-modal Hallucination for Few-shot Fine-grained Recognition

Frederik Pahde, Tassilo Klein and Moin Nabi

SAP AI Research
Jun 7, 2018


Conference on Computer Vision and Pattern Recognition (CVPR 2018), Workshop on Fine-grained Visual Categorization, Salt Lake City, USA

State-of-the-art deep learning algorithms generally require large amounts of data for model training. A lack of data can severely degrade performance, particularly in scenarios with fine-grained boundaries between categories. To this end, we propose a multimodal approach that bridges the information gap by means of meaningful joint embeddings. Specifically, we present a benchmark that is multimodal during training (i.e., images and texts) and single-modal at test time (i.e., images only); the associated task is to leverage the multimodal data of base classes (with many samples) to learn explicit visual classifiers for novel classes (with few samples). We then propose a framework built upon the idea of cross-modal data hallucination: a discriminative text-conditional GAN generates additional training samples, and a simple self-paced strategy selects which generated samples to keep.
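The self-paced selection step can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function name, the per-class top-k rule, and the use of the current classifier's confidence as the selection criterion are all illustrative assumptions.

```python
import numpy as np

def self_paced_selection(labels, class_probs, top_k=2):
    """Illustrative self-paced selection sketch (assumed criterion):
    for each class, keep only the top_k generated samples whose current
    classifier assigns the highest confidence to the conditioning class.

    labels:      (N,) conditioning class of each generated sample
    class_probs: (N, C) classifier probabilities for each sample
    returns:     sorted indices of the selected samples
    """
    selected = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]          # samples generated for class c
        conf = class_probs[idx, c]              # classifier confidence in class c
        keep = idx[np.argsort(conf)[::-1][:top_k]]  # most confident first
        selected.extend(keep.tolist())
    return sorted(selected)

# Toy usage: 3 samples hallucinated for class 0, 2 for class 1.
labels = np.array([0, 0, 0, 1, 1])
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7], [0.45, 0.55]])
print(self_paced_selection(labels, probs, top_k=2))  # [0, 2, 3, 4]
```

The easy-first intuition is that low-confidence hallucinated samples are more likely to be off-distribution, so the classifier is retrained only on the generated samples it already finds most plausible.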

Related blog post: Deep Few-Shot Learning
