Language Representations Can be What Recommenders Need: Findings and Potentials

Core Idea

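In short: item embeddings obtained by mapping frozen language-model representations of item titles, even through a simple linear map, already form an effective recommendation space that rivals or beats ID-based collaborative filtering. Building on this finding, the paper proposes AlphaRec, which combines an MLP projector over the language representations with graph convolution and an InfoNCE loss.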

Linear

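Below is a minimal PyTorch sketch of the linear-mapping finding: frozen language-model embeddings of item titles pass through a single learned linear layer and serve directly as item embeddings, with a user represented by the mean of the items they interacted with. The class name, the mean-pooled user vector, and the cosine scoring are my own illustrative choices rather than the paper's implementation; training (e.g., with InfoNCE) is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearMapRec(nn.Module):
    """Linear mapping from language space to a recommendation space (illustrative sketch)."""

    def __init__(self, text_embs: torch.Tensor, dim: int = 64):
        super().__init__()
        # text_embs: (num_items, text_dim), frozen language representations of item titles
        self.register_buffer("text_embs", text_embs)
        self.proj = nn.Linear(text_embs.size(1), dim, bias=False)  # the "linear" part

    def item_embeddings(self) -> torch.Tensor:
        return F.normalize(self.proj(self.text_embs), dim=-1)

    def user_embeddings(self, hist: torch.Tensor) -> torch.Tensor:
        # hist: (batch, num_hist) item indices of each user's interaction history
        items = self.item_embeddings()[hist]           # (batch, num_hist, dim)
        return F.normalize(items.mean(dim=1), dim=-1)  # mean-pooled user vector

    def scores(self, hist: torch.Tensor) -> torch.Tensor:
        # cosine similarity between users and all items
        return self.user_embeddings(hist) @ self.item_embeddings().T
```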

AlphaRec

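AlphaRec keeps the language representations frozen and stacks three ingredients on top: an MLP projector into the recommendation space, LightGCN-style propagation over the user-item graph, and an InfoNCE loss with temperature tau and sampled negatives (matching the tau / num_negs / projector entries in the configs below). The sketch below is my simplified reading, not the official code; in particular, building user inputs as the mean of interacted items' title embeddings, the exact MLP shape, and the adjacency handling are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlphaRecSketch(nn.Module):
    """Illustrative AlphaRec-style model: MLP projector + LightGCN-style propagation + InfoNCE."""

    def __init__(self, user_text: torch.Tensor, item_text: torch.Tensor,
                 norm_adj: torch.Tensor, dim: int = 64, num_layers: int = 2):
        super().__init__()
        # user_text: (num_users, d), e.g. mean of interacted items' title embeddings (assumption)
        # item_text: (num_items, d), frozen language representations of item titles
        # norm_adj:  sparse symmetric-normalized user-item adjacency of shape (U+I, U+I)
        self.register_buffer("user_text", user_text)
        self.register_buffer("item_text", item_text)
        self.norm_adj = norm_adj
        self.num_users = user_text.size(0)
        self.num_layers = num_layers
        d = item_text.size(1)
        self.mlp = nn.Sequential(nn.Linear(d, dim), nn.LeakyReLU(), nn.Linear(dim, dim))

    def propagate(self):
        x = self.mlp(torch.cat([self.user_text, self.item_text], dim=0))
        outs = [x]
        for _ in range(self.num_layers):            # LightGCN-style propagation (no extra weights)
            x = torch.sparse.mm(self.norm_adj, x)
            outs.append(x)
        x = torch.stack(outs, dim=0).mean(dim=0)    # average the layer outputs
        return x[:self.num_users], x[self.num_users:]

    def infonce(self, users, pos, negs, tau: float = 0.15) -> torch.Tensor:
        # users: (B,) user ids; pos: (B,) positive item ids; negs: (B, K) sampled negative item ids
        u, i = self.propagate()
        u = F.normalize(u[users], dim=-1)                                   # (B, dim)
        cand = F.normalize(i[torch.cat([pos[:, None], negs], dim=1)], dim=-1)  # (B, 1+K, dim)
        logits = (u[:, None, :] * cand).sum(-1) / tau                       # cosine similarity / tau
        targets = torch.zeros(len(users), dtype=torch.long, device=u.device)   # positive at index 0
        return F.cross_entropy(logits, targets)
```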

Other Potentials

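According to the paper, recommenders built on language representations also show good zero-shot recommendation ability in unseen domains and some capacity to capture user intentions expressed in natural language; those experiments are not reproduced below.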

Personal Tests

Movies

```yaml
# AlphaRec
root: ../../data
dataset: AmazonMovies_Alpha
tasktag: Matching

embedding_dim: 64
num_layers: 2

epochs: 500
batch_size: 4096
optimizer: adam
lr: 5.e-4
weight_decay: 1.e-6

tau: 0.15
num_negs: 256
projector: mlp

monitors: [LOSS, Recall@1, Recall@10, Recall@20, HitRate@10, HitRate@20, NDCG@10, NDCG@20]
which4best: Recall@20
```
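
Relative to the LightGCN run below, the AlphaRec config adds three method-specific entries: tau (the InfoNCE temperature), num_negs (the number of sampled negatives), and projector (presumably selecting the module, an MLP here, that maps the language representations into the 64-d recommendation space).
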
```yaml
# LightGCN
root: ../../data
dataset: AmazonMovies_Alpha
tasktag: Matching

embedding_dim: 64
num_layers: 2

epochs: 1000
batch_size: 2048
optimizer: adam
lr: 1.e-3
weight_decay: 1.e-3

monitors: [LOSS, Recall@1, Recall@10, Recall@20, HitRate@10, HitRate@20, NDCG@10, NDCG@20]
which4best: NDCG@20
```

Beauty

```yaml
# LightGCN

root: ../../data
dataset: Amazon2014Beauty_550811_ROU
tasktag: Matching

embedding_dim: 64
num_layers: 3

epochs: 1000
batch_size: 2048
optimizer: adam
lr: 1.e-3
weight_decay: 1.e-3

monitors: [LOSS, Recall@1, Recall@10, Recall@20, NDCG@10, NDCG@20]
which4best: NDCG@20
```

```yaml
# LightGCN + InfoNCE

root: ../../data
dataset: Amazon2014Beauty_550811_ROU
tasktag: Matching

embedding_dim: 64
num_layers: 3
num_negs: 256
tau: 0.15

epochs: 500
batch_size: 2048
optimizer: adam
lr: 5.e-4
weight_decay: 1.e-2

monitors: [LOSS, Recall@1, Recall@10, Recall@20, NDCG@10, NDCG@20]
which4best: NDCG@20
```

```yaml
# AlphaRec
root: ../../data
dataset: Amazon2014Beauty_550811_ROU
tasktag: Matching

embedding_dim: 64
num_layers: 3
tfile: llama2_7b_title.pkl # llama2_13b_title.pkl

epochs: 500
batch_size: 2048
optimizer: adam
lr: 5.e-4
weight_decay: 0.

tau: 0.15
num_negs: 256
projector: mlp

monitors: [LOSS, Recall@1, Recall@10, Recall@20, NDCG@10, NDCG@20]
which4best: NDCG@20
```
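
The tfile entries point to pickled item-title embeddings that have to be prepared beforehand. Below is a hedged sketch of how such a file could be produced for the MiniLM variant with sentence-transformers; the input path, the TITLE column, and the output format (a plain array aligned with item IDs) are my assumptions, not necessarily what the training code expects.

```python
import pickle
import pandas as pd
from sentence_transformers import SentenceTransformer

# Assumed layout: one row per item with its title; adjust the path and column names to your dataset.
items = pd.read_csv("../../data/Amazon2014Beauty_550811_ROU/item.txt", sep="\t")

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")
embs = model.encode(items["TITLE"].fillna("").tolist(),
                    batch_size=256, show_progress_bar=True)

# Store as a plain array aligned with item IDs (this format is an assumption).
with open("all_minilm_l12_v2_title.pkl", "wb") as f:
    pickle.dump(embs, f)
```
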

| Method | R@1 | R@10 | R@20 | N@10 | N@20 |
| --- | --- | --- | --- | --- | --- |
| LightGCN | 0.0079 | 0.0538 | 0.0836 | 0.0282 | 0.0361 |
| LightGCN+InfoNCE | 0.0098 | 0.0544 | 0.0829 | 0.0296 | 0.0371 |
| AlphaRec (Llama2-7B) | 0.0104 | 0.0618 | 0.0925 | 0.0330 | 0.0412 |
| AlphaRec (Llama2-13B) | 0.0107 | 0.0608 | 0.0921 | 0.0329 | 0.0412 |
| AlphaRec (MiniLM-L12-v2) | 0.0100 | 0.0608 | 0.0930 | 0.0322 | 0.0407 |
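
On Beauty, all three AlphaRec variants beat the ID-based LightGCN baselines on every metric, and the lightweight MiniLM-L12-v2 encoder remains competitive with the Llama2 representations.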

Baby

```yaml
# LightGCN
root: ../../data
dataset: Amazon2014Baby_550811_RAU
tasktag: Matching

embedding_dim: 64
# num_layers: 3
# num_negs: 256
# tau: 0.25

epochs: 100
batch_size: 2048
optimizer: adam
lr: 1.e-3
weight_decay: 5.e-3

monitors: [LOSS, Recall@1, Recall@10, Recall@20, NDCG@10, NDCG@20]
which4best: NDCG@20
```

```yaml
# LightGCN + InfoNCE
root: ../../data
dataset: Amazon2014Baby_550811_RAU
tasktag: Matching

embedding_dim: 64
num_layers: 3
num_negs: 256
tau: 0.25

epochs: 500
batch_size: 2048
optimizer: adam
lr: 1.e-3
weight_decay: 1.e-3

monitors: [LOSS, Recall@1, Recall@10, Recall@20, NDCG@10, NDCG@20]
which4best: NDCG@20
```

```yaml
# AlphaRec
root: ../../data
dataset: Amazon2014Baby_550811_RAU
tasktag: Matching

embedding_dim: 64
num_layers: 3
tfile: llama2_7b_title.pkl

epochs: 500
batch_size: 2048
optimizer: adam
lr: 5.e-4
weight_decay: 0.

tau: 0.25
num_negs: 256
projector: mlp

monitors: [LOSS, Recall@1, Recall@10, Recall@20, NDCG@10, NDCG@20]
which4best: NDCG@20
```

| Method | R@1 | R@10 | R@20 | N@10 | N@20 |
| --- | --- | --- | --- | --- | --- |
| LightGCN | 0.0037 | 0.0212 | 0.0357 | 0.0113 | 0.0151 |
| LightGCN+InfoNCE | 0.0036 | 0.0206 | 0.0344 | 0.0111 | 0.0147 |
| AlphaRec (Llama2-7B) | 0.0039 | 0.0243 | 0.0399 | 0.0128 | 0.0169 |
| AlphaRec (Llama2-13B) | 0.0037 | 0.0242 | 0.0399 | 0.0126 | 0.0167 |
| AlphaRec (MiniLM-L12-v2) | 0.0031 | 0.0229 | 0.0385 | 0.0117 | 0.0158 |
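
On Baby the picture is similar: adding InfoNCE does not help plain LightGCN, while AlphaRec with Llama2 representations is the strongest across the board; the MiniLM variant trails the Llama2 ones and falls behind the ID-based baselines only on R@1.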

References

  1. Sheng L., Zhang A., Zhang Y., Chen Y., Wang X., and Chua T. Language Representations Can be What Recommenders Need: Findings and Potentials. ICLR, 2025. [PDF] [Code]