
Toxicity Begets Toxicity: Unraveling Conversational Chains in Political Podcasts

Accepted at ACM Multimedia 2025

Naquee Rizwan, Nayandeep Deb, Sarthak Roy, Vishwajeet Singh Solanki, Kiran Garimella, Animesh Mukherjee

[Paper] || [arXiv] (main content + appendix in one PDF) || Please also see the [GitHub] link for the code.


Abstract

Tackling toxic behavior in digital communication continues to be a pressing concern for both academics and industry professionals. While significant research has explored toxicity on platforms like social networks and discussion boards, podcasts—despite their rapid rise in popularity—remain relatively understudied in this context. This work seeks to fill that gap by curating a dataset of political podcast transcripts and analyzing them with a focus on conversational structure. Specifically, we investigate how toxicity surfaces and intensifies through sequences of replies within these dialogues, shedding light on the organic patterns by which harmful language can escalate across conversational turns. Warning: Contains potentially abusive/toxic contents.


Dataset

The top 100 toxic conversation chains and their ground-truth change point detection (CPD) annotations, for both conservative and liberal podcast channels, are available in the GitHub repository under [cpd/dataset]. That folder contains:

  • two annotation CSV files (one each for conservatives and liberals) containing the labels of each individual annotator (e.g., Annotator_ND) as well as the majority-vote label (see the 'Inter_Annotator' column). These files also contain the change points predicted by traditional CPD algorithms (via the [ruptures] library).
  • two JSON files (one each for conservatives and liberals) containing the details of the top 100 toxic conversation chains.
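As a toy illustration of the majority-voting scheme described above, here is a minimal sketch. The annotator column names (other than `Annotator_ND`, which appears in this card) and the tie-breaking rule are assumptions for illustration, not the authors' exact code:

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common label among annotators.

    Ties are broken by first-seen order (an assumption made here for
    illustration; the released CSVs already contain the resolved
    'Inter_Annotator' column).
    """
    return Counter(labels).most_common(1)[0][0]

# Hypothetical row: per-annotator change-point labels for one segment.
row = {"Annotator_ND": 1, "Annotator_A2": 1, "Annotator_A3": 0}
print(majority_vote(row.values()))  # -> 1
```

In the released annotation files this aggregation is already precomputed, so the helper is only needed if you want to re-derive or audit the 'Inter_Annotator' column from the individual annotator columns.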

Hugging Face

In addition, we provide this [Hugging Face] dataset, which contains:

  • audio clips (.wav files) of the top 100 toxic conversation chains (for both conservatives and liberals). These files are required to run the audio prompts in [cpd/dataset/audio_prompt_cpd.py]. Note: please update the folder paths accordingly for the code to work.
  • all toxic conversation chains from both conservative and liberal podcast channels. As stated in the paper, we define a toxic conversation chain as one whose anchor segment's toxicity score is greater than 0.7.
  • the complete diarized dataset, with toxicity scores computed using the Perspective API, for both conservative and liberal podcast channels.
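Since this repository is gated, loading it requires requesting access on the Hub and authenticating with a user token. A minimal sketch, assuming access has been granted (the split name is an assumption; check the Hub page for the actual configuration). The threshold helper simply encodes the definition above: a chain is toxic when its anchor segment's toxicity score exceeds 0.7.

```python
def load_chains(token, split="train"):
    """Load the gated dataset from the Hub (requires granted access).

    The repo id is taken from this card; the split name is an
    assumption for illustration.
    """
    from datasets import load_dataset  # pip install datasets

    return load_dataset(
        "nrizwan/toxicity_begets_toxicity_conversational_chains",
        split=split,
        token=token,
    )

def is_toxic_chain(anchor_toxicity, threshold=0.7):
    """Per the paper's definition: a chain is toxic if its anchor
    segment's Perspective API toxicity score is greater than 0.7."""
    return anchor_toxicity > threshold

# Example with hypothetical scores:
print(is_toxic_chain(0.82))  # True
print(is_toxic_chain(0.55))  # False
```

The comparison is strict (`>`), matching the "greater than 0.7" wording above, so a chain whose anchor scores exactly 0.7 is not counted as toxic.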

Appendix

ACM MM 2025 did not provide for supplementary material, so we make it available [here].


Please cite our paper

@inproceedings{10.1145/3746027.3754553,
author = {Rizwan, Naquee and Deb, Nayandeep and Roy, Sarthak and Solanki, Vishwajeet Singh and Garimella, Kiran and Mukherjee, Animesh},
title = {Toxicity Begets Toxicity: Unraveling Conversational Chains in Political Podcasts},
year = {2025},
isbn = {9798400720352},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3746027.3754553},
doi = {10.1145/3746027.3754553},
abstract = {Tackling toxic behavior in digital communication continues to be a pressing concern for both academics and industry professionals. While significant research has explored toxicity on platforms like social networks and discussion boards, podcasts-despite their rapid rise in popularity-remain relatively understudied in this context. This work seeks to fill that gap by curating a dataset of political podcast transcripts and analyzing them with a focus on conversational structure. Specifically, we investigate how toxicity surfaces and intensifies through sequences of replies within these dialogues, shedding light on the organic patterns by which harmful language can escalate across conversational turns.  Warning: Contains potentially abusive/toxic contents.},
booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
pages = {11776–11784},
numpages = {9},
keywords = {change point detection, podcasts, toxic conversation chains, toxicity begets toxicity, transcripts},
location = {Dublin, Ireland},
series = {MM '25}
}

Contact

For any questions or issues, please contact: [email protected]
