After four years of hosting challenges in the domain of landmarks, this year we introduce the first competition for image representations that work across many object types.
Image representations are a critical building block of computer vision applications. Traditionally, research on image embedding learning has focused on per-domain models: papers generally propose generic embedding learning techniques and apply them to each domain separately, rather than developing a single embedding model that can be applied to all domains combined.
In this competition, the developed models are expected to retrieve relevant database images for a given query image (i.e., the model should retrieve database images containing the same object as the query). The images in our dataset span a variety of object types, including apparel, artwork, landmarks, furniture, and packaged goods.
This year’s competition is structured in a representation learning format: you will create a model that extracts a feature embedding for each image and submit the model via Kaggle Notebooks. Kaggle will run your model on a held-out test set, perform a k-nearest-neighbors lookup, and score the quality of the resulting embeddings. Both TensorFlow and PyTorch models are supported.
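To illustrate the evaluation step, here is a minimal sketch of a k-nearest-neighbors lookup over embeddings, using numpy and cosine similarity. This is not the official scoring code; the function name, the choice of cosine similarity, and the toy vectors are illustrative assumptions.

```python
import numpy as np

def knn_retrieve(query_emb, db_emb, k=5):
    """Return the indices of the k nearest database embeddings for each query
    (cosine similarity; illustrative sketch, not the official metric)."""
    # L2-normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = db_emb / np.linalg.norm(db_emb, axis=1, keepdims=True)
    sims = q @ d.T  # (n_queries, n_db) similarity matrix
    # Sort each row descending by similarity and keep the top-k indices.
    return np.argsort(-sims, axis=1)[:, :k]

# Toy example: 3 database vectors, 1 query closest to database vector 0.
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
queries = np.array([[0.9, 0.1]])
print(knn_retrieve(queries, db, k=2))  # → [[0 2]]
```

In practice, your submitted model would produce `query_emb` and `db_emb`, and the retrieved neighbor lists would be compared against the ground-truth relevant images to compute the leaderboard score.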
- 1st Place – $15,000
- 2nd Place – $10,000
- 3rd Place – $8,000
- 4th Place – $7,000
- 5th Place – $5,000
- 6th Place – $5,000