This work reformulates recommendation as a multitask Markov Decision Process in which each task represents a set of similar users, and finds that a task-specific policy is more effective than a single universal policy shared by all users.
Deep reinforcement learning (DRL) based recommender systems are well suited to the user cold-start problem because they capture user preferences progressively. However, most existing DRL-based recommender systems are suboptimal, since they apply a single policy to fit the dynamics of all users. We reformulate recommendation as a multitask Markov Decision Process, where each task represents a set of similar users. Since similar users exhibit similar dynamics, a task-specific policy is more effective than a single universal policy for all users. To make recommendations for cold-start users, we first use a default policy to collect a few initial interactions and identify the user's task, after which the corresponding task-specific policy is employed. We optimize our framework with Q-learning and account for task uncertainty through a mutual-information term over tasks. Experiments on three real-world datasets verify the effectiveness of the proposed framework.
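The abstract describes a two-phase serving procedure: a default policy gathers a few warm-up interactions from a cold-start user, the user's task is inferred from those interactions, and the matching task-specific policy takes over for the rest of the session. The sketch below illustrates only that control flow under assumed names (`DefaultPolicy`, `TaskPolicy`, `identify_task`, `WARMUP_STEPS`); the actual framework would replace the placeholders with learned task-specific Q-networks and a mutual-information-regularized task-inference module.

```python
import random
from typing import Callable, List

WARMUP_STEPS = 5   # interactions collected by the default policy (assumed)
NUM_TASKS = 3      # number of user clusters / tasks (assumed)
NUM_ITEMS = 100    # catalogue size (assumed)


class DefaultPolicy:
    """Task-agnostic policy used only to probe a cold-start user."""

    def act(self, history: List[int]) -> int:
        # Placeholder: recommend a random unseen item.
        return random.choice([i for i in range(NUM_ITEMS) if i not in history])


class TaskPolicy:
    """Policy specialized to one task (one cluster of similar users)."""

    def __init__(self, task_id: int):
        self.task_id = task_id

    def act(self, history: List[int]) -> int:
        # Placeholder: a real version would query a task-specific Q-network.
        return random.choice([i for i in range(NUM_ITEMS) if i not in history])


def identify_task(history: List[int], feedback: List[float]) -> int:
    """Infer the user's task from the warm-up interactions.

    Placeholder heuristic; the paper's approach would use a learned
    task-inference module regularized by mutual information over tasks.
    """
    return hash(tuple(history)) % NUM_TASKS


def recommend_session(user_feedback_fn: Callable[[int], float],
                      session_length: int = 20) -> List[int]:
    """Serve one cold-start user: default policy first, then a task policy."""
    default_policy = DefaultPolicy()
    task_policies = {t: TaskPolicy(t) for t in range(NUM_TASKS)}

    history: List[int] = []
    feedback: List[float] = []

    # Phase 1: probe the user with the default policy.
    for _ in range(WARMUP_STEPS):
        item = default_policy.act(history)
        history.append(item)
        feedback.append(user_feedback_fn(item))

    # Phase 2: switch to the policy of the inferred task.
    task = identify_task(history, feedback)
    policy = task_policies[task]
    for _ in range(session_length - WARMUP_STEPS):
        item = policy.act(history)
        history.append(item)
        feedback.append(user_feedback_fn(item))

    return history


if __name__ == "__main__":
    # Simulated user who responds positively to even-numbered items.
    recommended = recommend_session(lambda item: 1.0 if item % 2 == 0 else 0.0)
    print(recommended)
```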