arxiv:2510.19304

Loopholing Discrete Diffusion: Deterministic Bypass of the Sampling Wall

Published on Oct 22 · Submitted by mingyu jo on Oct 24

Abstract

Loopholing Discrete Diffusion Models (LDDMs) enhance text generation by preserving distributional information through a deterministic latent pathway, reducing perplexity and improving coherence and performance on reasoning tasks.

AI-generated summary

Discrete diffusion models offer a promising alternative to autoregressive generation through parallel decoding, but they suffer from a sampling wall: once categorical sampling occurs, rich distributional information collapses into one-hot vectors and cannot be propagated across steps, forcing subsequent steps to operate with limited information. To mitigate this problem, we introduce Loopholing, a novel and simple mechanism that preserves this information via a deterministic latent pathway, leading to Loopholing Discrete Diffusion Models (LDDMs). Trained efficiently with a self-conditioning strategy, LDDMs achieve substantial gains: reducing generative perplexity by up to 61% over prior baselines, closing the gap with autoregressive models (and in some cases surpassing them), and producing more coherent text. Applied to reasoning tasks, LDDMs also improve performance on arithmetic benchmarks such as Countdown and Game of 24. These results also indicate that loopholing mitigates idle steps and oscillations, providing a scalable path toward high-quality non-autoregressive text generation.
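To make the sampling wall concrete, here is a minimal, self-contained PyTorch sketch of the idea. The `ToyDenoiser`, its GRU body, and all sizes are illustrative assumptions, not the paper's architecture; the point is only the control flow: standard decoding feeds forward nothing but sampled token ids, while the loopholing variant additionally carries the pre-sampling hidden state forward as a deterministic latent.

```python
# Illustrative sketch of the "sampling wall" and a loopholing-style bypass.
# All module names, sizes, and the toy denoiser are assumptions for
# demonstration, not the paper's actual model.
import torch
import torch.nn as nn

VOCAB, DIM, SEQ = 32, 64, 8

class ToyDenoiser(nn.Module):
    """Predicts per-position token logits and exposes its hidden state."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.latent_in = nn.Linear(DIM, DIM)   # injects the carried latent
        self.body = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens, latent=None):
        h = self.embed(tokens)
        if latent is not None:                  # loopholing: deterministic pathway
            h = h + self.latent_in(latent)
        h, _ = self.body(h)
        return self.head(h), h                  # (logits, latent for next step)

@torch.no_grad()
def generate(model, steps=4, loopholing=True):
    tokens = torch.zeros(1, SEQ, dtype=torch.long)  # fully "masked" start
    latent = None
    for _ in range(steps):
        logits, h = model(tokens, latent)
        # Sampling wall: categorical sampling collapses the full logits
        # into one-hot token ids, discarding the rest of the distribution.
        tokens = torch.distributions.Categorical(logits=logits).sample()
        # Loopholing bypass: carry the pre-sampling hidden state forward,
        # so distributional information survives across steps.
        latent = h if loopholing else None
    return tokens

print(generate(ToyDenoiser()))
```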

Community

Paper author · Paper submitter

In discrete diffusion, the predicted distribution, which encodes plausible candidates and their relative likelihoods, collapses into a one-hot token at every sampling step, so subsequent steps must operate on tokens that carry little of that information. To address this, we introduce loopholing, a mechanism that deterministically carries a latent across steps and reduces reliance on sampling. This simple change yields more natural, coherent text and better performance on reasoning tasks.
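As a companion to the sampling sketch above, here is a hedged sketch of how self-conditioning training for the carried latent might look, reusing `ToyDenoiser`, `VOCAB`, and `SEQ` from the previous snippet. The 50% gate and the stop-gradient first pass are common self-conditioning choices assumed here for illustration; the paper's exact training recipe may differ.

```python
# Hedged sketch of self-conditioning training for the loopholed latent.
# Reuses ToyDenoiser, VOCAB, and SEQ from the previous snippet.
import torch
import torch.nn.functional as F

def self_conditioned_step(model, noisy_tokens, clean_tokens, optimizer):
    latent = None
    # With probability 0.5 (an assumed rate), run a first pass without a
    # carried latent and condition the real pass on its detached hidden
    # state, so the model learns to use the deterministic pathway without
    # backpropagating through two passes.
    if torch.rand(()).item() < 0.5:
        with torch.no_grad():
            _, latent = model(noisy_tokens)
    logits, _ = model(noisy_tokens, latent)     # second pass uses the latent
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           clean_tokens.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data:
model = ToyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy = torch.randint(VOCAB, (1, SEQ))
clean = torch.randint(VOCAB, (1, SEQ))
print(self_conditioned_step(model, noisy, clean, opt))
```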
