---
configs:
  - config_name: input
    data_files:
      - split: valid
        path:
          - interaction_splits/valid_inter_input.tsv
      - split: test
        path:
          - interaction_splits/test_inter_input.tsv
    default: true
  - config_name: target
    data_files:
      - split: valid
        path:
          - interaction_splits/valid_inter_target.tsv
  - config_name: mapping
    data_files: cwid_to_id.tsv
license: mit
---

# ClueWeb-Reco

## ClueWeb-Reco as a Webpage Recommendation Hidden Test

ClueWeb-Reco is a dataset constructed by mapping real-life U.S. browsing histories to publicly available websites in the English subset of the ClueWeb22-B dataset. Its synthetic nature ensures strong privacy guarantees while closely mirroring real user interactions, so it reflects the real-life performance of recommender systems.

ClueWeb-Reco serves as the hidden test set of the ORBIT benchmark. The task simulates real-world user browsing behavior: a model is given a sequence of user interactions as ClueWeb page IDs and must predict the next item the user will engage with from a candidate pool drawn from the English subset of ClueWeb22-B.

ClueWeb-Reco follows the sequential leave-one-out splitting method and is split into validation and test sets. That is, for each sequence grouped by user or session of length n, the first n − 2 items are used as the user history input for validation, with the (n − 1)-th item as the validation target. The n-th item is reserved as the target for the test set, while the preceding n − 1 items are given as the test input.
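
For concreteness, here is a minimal sketch of this splitting scheme; the function and variable names are illustrative and not part of the released files:

```python
# Minimal sketch of the sequential leave-one-out split described above.
# `sequence` is one user's (or session's) chronologically ordered list of
# ClueWeb-Reco internal docids.
def leave_one_out_split(sequence):
    n = len(sequence)
    assert n >= 3, "need at least three interactions to form both splits"
    valid_input = sequence[: n - 2]   # first n - 2 items
    valid_target = sequence[n - 2]    # (n - 1)-th item
    test_input = sequence[: n - 1]    # first n - 1 items
    test_target = sequence[n - 1]     # n-th item (hidden in this release)
    return valid_input, valid_target, test_input, test_target
```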

We provide input and ground truth for the validation set. The target of the test set is hidden to avoid possible data leakage and ensure the effectiveness and integrity of the benchmark.
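
Given the configs declared in the metadata above, the released splits can be loaded with the Hugging Face datasets library roughly as follows; the repository id below is a placeholder, and the TSV parsing details may need adjusting:

```python
from datasets import load_dataset

REPO_ID = "<namespace>/ClueWeb-Reco"  # placeholder: substitute this repo's actual id

# Config "input" exposes the "valid" and "test" interaction inputs.
inputs = load_dataset(REPO_ID, "input")
valid_input, test_input = inputs["valid"], inputs["test"]

# Config "target" exposes only the validation ground truth.
valid_target = load_dataset(REPO_ID, "target", split="valid")

# Config "mapping" exposes cwid_to_id.tsv.
mapping = load_dataset(REPO_ID, "mapping")
```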

## Source Files

- cwid_to_id.tsv: mapping from the official ClueWeb22 docids of ClueWeb-Reco's candidate pool to ClueWeb-Reco internal docids. All IDs in the files below are represented as ClueWeb-Reco internal docids (a pandas sketch for loading these files and inverting the mapping follows the interaction-split list below).

### Splits in pure interaction format

- interaction_splits:
  - valid_inter_input.tsv: input for the validation dataset
  - valid_inter_target.tsv: validation dataset ground truth
  - test_inter_input.tsv: input for the testing dataset (ground truth hidden)
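
If you prefer working with the raw TSV files, a pandas sketch for loading the interaction splits and inverting the docid mapping might look like the following; the assumption that cwid_to_id.tsv has two header-less columns (ClueWeb22 docid, then internal docid) and the header handling of the split files should be checked against the actual files:

```python
import pandas as pd

# Interaction splits (tab-separated).
valid_input = pd.read_csv("interaction_splits/valid_inter_input.tsv", sep="\t")
valid_target = pd.read_csv("interaction_splits/valid_inter_target.tsv", sep="\t")
test_input = pd.read_csv("interaction_splits/test_inter_input.tsv", sep="\t")

# Docid mapping: assumed to be two header-less columns,
# official ClueWeb22 docid -> ClueWeb-Reco internal docid.
mapping = pd.read_csv("cwid_to_id.tsv", sep="\t", header=None,
                      names=["cw22_docid", "internal_id"])
internal_to_cwid = dict(zip(mapping["internal_id"], mapping["cw22_docid"]))
```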

### Splits in ordered ClueWeb id list format

- ordered_id_splits:
  - valid_input.tsv: input for the validation dataset
  - valid_target.tsv: validation dataset ground truth
  - test_input.tsv: input for the testing dataset (ground truth hidden)

### Utility files for ClueWeb22Api usage and example processing of the ordered ClueWeb id list format

- cw_data_processing:
  - ClueWeb22Api.py: API to retrieve ClueWeb document information from official ClueWeb22 docids (a usage sketch follows this list)
  - example_dataset.py: example of loading input data sequences with ClueWeb22Api
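
The snippet below sketches how an official ClueWeb22 docid might be resolved to document content with the bundled ClueWeb22Api.py. The constructor and method names are assumptions, so consult example_dataset.py for the exact interface; a local copy of ClueWeb22-B is required.

```python
from ClueWeb22Api import ClueWeb22Api  # bundled in cw_data_processing/

# Assumed interface: ClueWeb22Api(cw22_docid, cw22_root) plus a clean-text
# accessor; verify against ClueWeb22Api.py and example_dataset.py.
cw22_root = "/path/to/ClueWeb22"           # local ClueWeb22-B copy
cw22_docid = "clueweb22-en0000-00-00000"   # placeholder official docid
api = ClueWeb22Api(cw22_docid, cw22_root)
clean_text = api.get_clean_text()          # assumed method name
```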

## Note

The ClueWeb-Reco dataset was collected, stored, released, and is maintained by our team at Carnegie Mellon University.