BSAM DATASETS

Citation Author(s):
Jinfeng Xu
Submitted by:
Jinfeng Xu
Last updated:
Thu, 01/09/2025 - 09:40
DOI:
10.21227/rb49-db43
License:

Abstract 

This collection contains all multimodal recommendation datasets used in the manuscript Enhancing Robustness and Generalization Capability for Multimodal Recommender Systems via Sharpness-Aware Minimization (BSAM). It comprises five Amazon datasets: Baby, Sports, Clothing, Pet, and Office. Each dataset includes both visual and textual modalities, provided as item images and item descriptions. Our data preprocessing methodology follows the approach outlined in the MMRec framework. We follow the popular evaluation setting with a random 8:1:1 data split for training, validation, and testing, and we filter out all items with missing modality information to ensure that every item has both visual and textual modalities.
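The filtering and splitting procedure described above can be sketched as follows. This is an illustrative reimplementation, not the released preprocessing code; the function and variable names are hypothetical, and the seed value is an assumption.

```python
import random

def filter_and_split(interactions, items_with_visual, items_with_textual,
                     ratios=(0.8, 0.1, 0.1), seed=42):
    """Keep only interactions whose item has both modalities, then
    randomly split the remainder 8:1:1 into train/valid/test.

    interactions: list of (user_id, item_id) pairs
    items_with_visual / items_with_textual: sets of item ids that
        have the corresponding modality available
    """
    # Items with a complete modality profile (both image and text).
    complete = items_with_visual & items_with_textual
    kept = [(u, i) for (u, i) in interactions if i in complete]

    # Shuffle reproducibly, then cut at the 80% and 90% marks.
    rng = random.Random(seed)
    rng.shuffle(kept)
    n = len(kept)
    n_train = int(ratios[0] * n)
    n_valid = int(ratios[1] * n)
    train = kept[:n_train]
    valid = kept[n_train:n_train + n_valid]
    test = kept[n_train + n_valid:]
    return train, valid, test
```

In practice, frameworks such as MMRec perform this splitting internally from a configurable ratio; the sketch only mirrors the setting stated in the abstract.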

Instructions: 

We provide the processed multimodal recommendation datasets in a form that can be used directly in widely used recommendation frameworks such as RecBole and MMRec.
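To feed a split into RecBole, interactions are typically stored as tab-separated atomic files with typed column headers. The sketch below writes a minimal two-column `.inter` file; it assumes the basic `user_id`/`item_id` layout, and real datasets may carry additional columns such as rating or timestamp.

```python
def write_inter_file(path, interactions):
    """Write (user, item) pairs as a RecBole-style atomic .inter file:
    a tab-separated text file whose header declares each column's type."""
    with open(path, "w", encoding="utf-8") as f:
        f.write("user_id:token\titem_id:token\n")
        for user, item in interactions:
            f.write(f"{user}\t{item}\n")
```

One file per split (e.g. `baby.train.inter`) is a common convention, but the exact file naming expected by each framework should be checked against its documentation.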

Dataset Files

    Files have not been uploaded for this dataset