MVC: A Dataset for View-Invariant Clothing Retrieval and Attribute Prediction

Kuan-Hsien Liu, Ting-Yen Chen, Chu-Song Chen

Welcome to the MVC dataset.

This webpage is the central source of information on the MVC dataset for your research and projects.


Clothing retrieval and clothing style recognition are important and practical problems that have drawn considerable attention in recent years. However, the clothing photos in existing datasets are mostly of front or near-front view, and no dataset has been designed to study the influence of viewing angle on clothing retrieval performance. To address the view-invariant clothing retrieval problem properly, we construct a challenging clothing dataset, called the Multi-View Clothing (MVC) dataset. This dataset not only provides four different views for each clothing item, but also annotates 264 attributes describing clothing appearance. We adopt a state-of-the-art deep learning method to provide baseline results for attribute prediction and clothing retrieval. We also evaluate the method in a more difficult setting, cross-view exact clothing item retrieval. This dataset can serve as a basis for further studies toward view-invariant clothing retrieval.


Kuan-Hsien Liu, Ting-Yen Chen, and Chu-Song Chen. MVC: A Dataset for View-Invariant Clothing Retrieval and Attribute Prediction, ACM ICMR 2016. [Pdf]


Please note that this dataset is made available for academic research purposes only. All the images were collected from the Internet, and the copyright belongs to the original owners.

Here, we provide 161,260 annotated images (1920 x 2240 resolution) with 264 attribute labels (note that the total number of images here differs from that reported in the paper).

Matlab version
JSON version
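For the JSON version, the annotations can be read with standard tooling. The sketch below is a minimal, hypothetical example of loading per-image records and turning an attribute dictionary into a fixed-order binary vector for the 264 attributes; the file name, record layout, and attribute names shown are assumptions for illustration, not the actual MVC schema.

```python
import json

def load_annotations(path):
    """Read a JSON file containing a list of per-image annotation records.

    The path and schema are assumptions; adjust to the real MVC release.
    """
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def attribute_vector(record, attribute_names):
    """Map a record's attribute dict to a fixed-order 0/1 vector.

    Missing attributes are treated as absent (0).
    """
    attrs = record.get("attributes", {})
    return [int(bool(attrs.get(name, 0))) for name in attribute_names]

if __name__ == "__main__":
    # Tiny synthetic record for demonstration; not real MVC data.
    records = [{"image": "0001_front.jpg", "view": "front",
                "attributes": {"sleeveless": 1, "striped": 0}}]
    names = ["sleeveless", "striped"]
    print(attribute_vector(records[0], names))  # -> [1, 0]
```

In practice one would fix a single ordering of the 264 attribute names once and reuse it for every image, so the resulting vectors are directly comparable across views of the same item.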

Contact Authors

If you have any questions regarding the MVC dataset, you can email us here.