AutoencoderKLHunyuanImage
The 2D variational autoencoder (VAE) model with KL loss used in HunyuanImage-2.1.
The model can be loaded with the following code snippet.
import torch
from diffusers import AutoencoderKLHunyuanImage

vae = AutoencoderKLHunyuanImage.from_pretrained("hunyuanvideo-community/HunyuanImage-2.1-Diffusers", subfolder="vae", torch_dtype=torch.bfloat16)
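Continuing from the snippet above, a minimal round-trip sketch, assuming the standard diffusers encode/decode pattern (encode exposing a latent_dist to sample from); the input shape and CUDA device are illustrative assumptions, not requirements:

import torch

vae = vae.to("cuda")  # the AutoencoderKLHunyuanImage instance loaded above

# Placeholder pixel tensor; a real pipeline would pass preprocessed image values here.
image = torch.randn(1, 3, 1024, 1024, dtype=torch.bfloat16, device="cuda")

latents = vae.encode(image).latent_dist.sample()  # assumed standard diffusers encode API
reconstruction = vae.decode(latents).sample       # DecoderOutput.sample holds the decoded pixels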
AutoencoderKLHunyuanImage

class diffusers.AutoencoderKLHunyuanImage

( in_channels: int, out_channels: int, latent_channels: int, block_out_channels: typing.Tuple[int, ...], layers_per_block: int, spatial_compression_ratio: int, sample_size: int, scaling_factor: float = None, downsample_match_channel: bool = True, upsample_match_channel: bool = True )
A VAE model for 2D images with spatial tiling support.
This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
disable_slicing

( )

Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_tiling

( )

Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_slicing

( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
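A hedged usage sketch of the slicing toggle, continuing with the vae and latents from the round-trip sketch above:

vae.enable_slicing()                 # decode the batch one slice at a time to save memory
images = vae.decode(latents).sample
vae.disable_slicing()                # restore single-step decoding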
enable_tiling

( tile_sample_min_size: typing.Optional[int] = None, tile_overlap_factor: typing.Optional[float] = None )

Enable spatially tiled VAE processing. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding and decoding in several steps. This is useful for saving a large amount of memory and for processing larger images.
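A usage sketch; calling the method with no arguments falls back to the defaults in the signature above, and the tile size and overlap values below are illustrative assumptions:

vae.enable_tiling()                                                    # built-in defaults
vae.enable_tiling(tile_sample_min_size=512, tile_overlap_factor=0.25)  # illustrative values
image = vae.decode(latents).sample  # decoded tile by tile, then blended
vae.disable_tiling()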
forward

( sample: Tensor, sample_posterior: bool = False, return_dict: bool = True, generator: typing.Optional[torch._C.Generator] = None )
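The forward pass carries no description in the source; going by the signature and the usual diffusers VAE convention, it is assumed here to run a full encode/decode pass, sampling from the posterior when sample_posterior=True:

# Assumed behavior: encode `image`, sample latents (optionally with `generator`), then decode.
generator = torch.Generator(device="cuda").manual_seed(0)
output = vae(sample=image, sample_posterior=True, generator=generator)
reconstruction = output.sample  # DecoderOutput.sample when return_dict=True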
tiled_decode

( z: Tensor, return_dict: bool = True ) → ~models.vae.DecoderOutput or tuple
Parameters
- z (torch.Tensor) — Latent tensor of shape (B, C, H, W).
- return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple.
Returns
~models.vae.DecoderOutput or tuple
If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is returned.
Decode latents using a spatial tiling strategy.
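enable_tiling presumably routes decoding through this method; a direct-call sketch, continuing from the round-trip example above:

z = latents  # latent tensor shaped (B, C, H, W)
output = vae.tiled_decode(z, return_dict=True)  # ~models.vae.DecoderOutput per the signature
image = output.sample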
tiled_encode

( x: Tensor ) → torch.Tensor

Encode input using a spatial tiling strategy.
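A matching direct-call sketch; per the signature above, the method returns a plain torch.Tensor of latents (the pixel input shape is an assumption):

x = torch.randn(1, 3, 2048, 2048, dtype=torch.bfloat16, device="cuda")  # assumed pixel input
z = vae.tiled_encode(x)  # torch.Tensor of latents, computed tile by tile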
DecoderOutput

class diffusers.models.autoencoders.vae.DecoderOutput

( sample: Tensor, commit_loss: typing.Optional[torch.FloatTensor] = None )
Output of a decoding method.