A Ranked Bandit Approach for Multi-stakeholder Recommender Systems

DocUID: 2022-008

Full Text: PDF

Authors: Tahereh Arabghalizi, Alexandros Labrinidis

Abstract: Recommender systems traditionally find the most relevant products or services for users, tailored to their needs or interests, but they ignore the interests of the other sides of the market (a.k.a. stakeholders). In this paper, we propose a Ranked Bandit approach for an online multi-stakeholder recommender system that sequentially selects the top-k items according to the relevance and priority of all involved stakeholders. We present three different criteria for incorporating each stakeholder's priority when evaluating our approach. Our extensive experimental results on a movie dataset show that contextual multi-armed bandits with a relevance function achieve a higher level of satisfaction for all involved stakeholders in the long term.
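To make the ranked-bandit idea in the abstract concrete, below is a minimal Python sketch, not the authors' implementation: one bandit per ranking slot picks items sequentially (skipping items already chosen for higher slots), and each slot's bandit is rewarded with a blend of user feedback and a stakeholder utility weighted by priority. The item ids, utilities, priority weights, and the simple weighted-sum reward are illustrative assumptions, not the paper's actual relevance function or data.

```python
import random
from collections import defaultdict

K = 3                                 # size of the recommended list
ITEMS = [f"m{i}" for i in range(10)]  # hypothetical movie ids

# Hypothetical stakeholder signals (illustrative only): each item's utility to
# the provider, and the priority weight given to each stakeholder's interest.
PROVIDER_UTILITY = {item: random.random() for item in ITEMS}
PRIORITIES = {"user": 0.6, "provider": 0.4}

class SlotBandit:
    """Epsilon-greedy multi-armed bandit owned by one ranking slot."""
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # times each item was shown in this slot
        self.values = defaultdict(float)  # running mean reward per item

    def select(self, excluded):
        candidates = [i for i in ITEMS if i not in excluded]
        if random.random() < self.epsilon:
            return random.choice(candidates)              # explore
        return max(candidates, key=lambda i: self.values[i])  # exploit

    def update(self, item, reward):
        self.counts[item] += 1
        self.values[item] += (reward - self.values[item]) / self.counts[item]

def multi_stakeholder_reward(item, clicked):
    """Blend user feedback with provider utility, weighted by priority."""
    user_part = 1.0 if clicked else 0.0
    return (PRIORITIES["user"] * user_part
            + PRIORITIES["provider"] * PROVIDER_UTILITY[item])

bandits = [SlotBandit() for _ in range(K)]
user_taste = {item: random.random() for item in ITEMS}  # hidden; simulation only

for _ in range(5000):
    # Ranked-bandit step: slots pick sequentially, skipping items chosen above.
    ranking = []
    for b in bandits:
        ranking.append(b.select(excluded=set(ranking)))
    # Simulate per-item user feedback, then reward each slot's bandit.
    for b, item in zip(bandits, ranking):
        clicked = random.random() < user_taste[item]
        b.update(item, multi_stakeholder_reward(item, clicked))

# Greedy read-out of the learned top-k list.
for b in bandits:
    b.epsilon = 0.0
final = []
for b in bandits:
    final.append(b.select(excluded=set(final)))
print("learned top-k:", final)
```

The per-slot decomposition is the standard ranked-bandit scheme; the stakeholder-weighted reward here is only a stand-in for the paper's relevance function and priority criteria.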

Keywords: Multi-stakeholder Recommender Systems; Multi-armed Bandits; Ranked Bandit;

Published In: Workshop on Multi-Objective Recommender Systems (MORS'22), in conjunction with the 16th ACM Conference on Recommender Systems (RecSys 2022)

Pages: 22

Year Published: 2022

Project: PittSmartLiving

Subject Area: Multi-armed Bandits, Recommender Systems

Publication Type: Workshop Paper

Sponsor: NSF CNS-1739413

Citation: Tahereh Arabghalizi and Alexandros Labrinidis. A Ranked Bandit Approach for Multi-stakeholder Recommender Systems. Workshop on Multi-Objective Recommender Systems (MORS'22), in conjunction with the 16th ACM Conference on Recommender Systems (RecSys 2022). 22. 2022.