---
license: cdla-permissive-2.0
task_categories:
  - text-ranking
  - text-classification
language:
  - en
tags:
  - food
  - news
  - travel
  - recommendations
---

Overview

RecoReact is a novel dataset of multi-turn interactions between real users and an AI assistant whose recommendations are drawn at random (see Dataset Creation), providing a unique set of feedback signals, including multi-turn natural language requests, structured item selections, item ratings, and user profiles. The dataset spans three domains: news articles, travel destinations, and meal planning.

Dataset Details

This dataset contains 1785 interactions from 595 users (with a median completion time of about 22 minutes). RecoReact also includes each user's fully anonymized profile data, as well as information on all items in each domain, including a title, high-level category, description, and URL to a thumbnail image. Beyond natural language, the dataset contains several other types of feedback signals: the set of selected items and survey responses. Other information:

  • Curated by: Felix Leeb and Tobias Schnabel
  • Funded by: Microsoft Research (Augmented Learning and Reasoning team)
  • Language(s) (NLP): English
  • License: CDLA Permissive, Version 2.0

User and Item Features

The dataset broadly contains four types of user feedback over three user-assistant turns:

  • User Profiles: Structured responses to intake questions on interests, goals, and habits.
  • Behavioral History: Items selected in each turn.
  • Satisfaction: Ratings of how satisfying and relevant the entire set of recommendations was in each turn.
  • Natural Language Messages: The user's initial request and requested modifications/feedback in each turn.

Uses

The dataset is intended for research on personalized recommendation systems, in particular to study how different types of feedback signals (ratings, written feedback, user profiles) can be used to increase the relevance of recommendations.

Dataset Structure

The dataset is split into three files for each of the three domains (news, travel, and food), for a total of nine files. For each domain, [domain]-products.jsonl contains all information about the items that can be recommended, [domain]-users.jsonl contains all the user information, and [domain]-impressions.jsonl contains the interactions between users and the assistant (including the user requests and selections).
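
For example, all three files for one domain can be loaded with pandas (a minimal sketch; the file names follow the [domain]- naming above, and the fields are described below):

```python
import pandas as pd

domain = "news"  # one of "news", "travel", "food"

# Load the three per-domain files described in this section.
products = pd.read_json(f"{domain}-products.jsonl", lines=True)
users = pd.read_json(f"{domain}-users.jsonl", lines=True)
impressions = pd.read_json(f"{domain}-impressions.jsonl", lines=True)

print(len(products), "items |", len(users), "users |", len(impressions), "impressions")
```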

The [domain]-impressions.jsonl file contains three impressions (one for each round) per user:

  • iid: a unique identifier for the interaction
  • uid: the user identifier
  • round: the round number of the interaction (1-3)
  • categories: a list of categories the user selected for the recommendations
  • request: the user's request to the assistant
  • selected1, selected2, selected3: the user's selections from each set of recommendations
  • update1, update2: the user's requested updates to the recommendations
  • rating1, rating2, rating3: 9-point scale ratings of each set of recommendations
  • summary: a free-response summary of the user's feedback on the recommendations
  • rating_summary: 9-point scale rating of the summary
  • good_suggestions: a 5-point scale rating of how good the suggestions were
  • good_selections: a 5-point scale rating of how good the selections were
  • good_request_match: a 5-point scale rating of how well the recommendations matched the request
  • good_summary_match: a 5-point scale rating of how well the summary matched the feedback
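
As a quick sanity check of these fields, mean satisfaction per round can be computed (a sketch reusing the impressions frame loaded above):

```python
# Mean 9-point rating of each recommendation set within a round, grouped by round.
per_round = impressions.groupby("round")[["rating1", "rating2", "rating3"]].mean()
print(per_round)
```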

The [domain]-products.jsonl file lists all items in a given domain:

  • pid: a unique identifier for the item
  • title: the title of the item (shown to the user)
  • description: a longer description of the item (shown to the user)
  • thumbnail: a URL to a thumbnail image of the item (shown to the user)
  • category: a category label for the item (not shown to the user)

The [domain]-users.jsonl contains all static information about each user:

  • uid: a unique identifier for the user
  • source_types: a list of categories selected by the user
  • desc_sources: a free-response description of how the user engages with the domain
  • personalized: a 5-point scale rating of how much the user values personalized recommendations
  • explore: a 5-point scale rating of how much the user values exploring new options
  • desc_selection: a free-response description of the user's selection process
  • task_clear: a 5-point scale rating of how clear the user found the task
  • task_difficult: a 5-point scale rating of how difficult the user found the task
  • dialogue_helpful: a 5-point scale rating of how helpful the user found the dialogue with the assistant
  • recs_satisfied: a 5-point scale rating of how satisfied the user was with the recommendations
  • task_feedback: a free-response description of the user's feedback on the task
  • Travel-specific fields:
    • companions: a list of companions the user prefers to travel with
  • Food-specific fields:
    • companions: a list of companions the user eats with or cooks for
  • News-specific fields:
    • frequency: how often the user consumes news
    • reasons: a free-response description of why the user reads news
    • demand: a 5-point scale rating of how much the user would like to use a personalized AI assistant for news
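
The uid field links these profiles to the impressions above (items are likewise identified by pid); a minimal sketch joining the two, reusing the frames loaded earlier:

```python
# Attach each user's static profile to their per-round interactions via uid.
joined = impressions.merge(users, on="uid", how="left")

# Example question: do users who value personalized recommendations more
# also rate the final recommendation set of each round higher?
print(joined.groupby("personalized")["rating3"].mean())
```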

Dataset Creation

The data was collected using a web survey with participants recruited through Prolific. Participants were first asked to provide general information about their habits and preferences in one of the three domains; they then made a specific request to the assistant and provided feedback on the recommendations they received. We use a hybrid interface with both free-text and structured input: for item selection, for example, users choose from a visual layout of 12 items with thumbnails. In each round, the 12 items are drawn uniformly at random, without replacement, from the general inventory filtered down to the main interest categories selected during profile building, which ensures both unbiasedness and a baseline level of relevance.
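
For concreteness, the sampling step might look as follows (an illustrative sketch of the procedure described above, not the authors' code; the function name and frame layout are assumptions, reusing the products frame loaded earlier):

```python
def sample_recommendations(products, user_categories, k=12, seed=None):
    # Restrict the inventory to the interest categories the user selected
    # during profile building ...
    pool = products[products["category"].isin(user_categories)]
    # ... then draw k items uniformly at random, without replacement.
    return pool.sample(n=k, replace=False, random_state=seed)
```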

Curation Rationale

We created this dataset because there were no public recommendation datasets that combined conventional feedback signals (ratings, selections) with natural language feedback and profile information.

Dataset Sources

For each domain, we created an inventory of candidate items.

Bias, Risks, and Limitations

Participants in the survey were recruited through Prolific, with the only requirements being fluency in English and residence in the US. This may introduce biases, as the participants may not be representative of the general population. Additionally, only a small number of participants (approximately 600) were recruited, which may limit the generalizability of the dataset.

Citation

BibTeX:

@misc{recoreact2025arxiv,
      title={Do as I say, not just as I do: Understanding Feedback for Recommendation Systems}, 
      author={Felix Leeb and Tobias Schnabel},
      year={2025},
      eprint={TBD},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={TBD}, 
}