The Latent Diffusion Model (LDM)[1] is a diffusion model architecture developed by the CompVis (Computer Vision & Learning)[2] group at LMU Munich.[3]
| Original author(s) | CompVis |
| --- | --- |
| Initial release | December 20, 2021 |
| Repository | https://github.com/CompVis/latent-diffusion |
| Written in | Python |
| License | MIT |
Introduced in 2015, diffusion models (DMs) are trained with the objective of undoing successive applications of noise (commonly Gaussian) to training images. The LDM is an improvement on the standard DM, achieved by performing diffusion modeling in a latent space and by allowing self-attention and cross-attention conditioning.
LDMs are widely used in practical diffusion models. For instance, Stable Diffusion versions 1.1 to 2.1 were based on the LDM architecture.[4]
Diffusion models were introduced in 2015 as a method to learn a model that can sample from a highly complex probability distribution. They used techniques from non-equilibrium thermodynamics, especially diffusion.[5] The 2015 paper was accompanied by a software implementation in Theano.[6]
A 2019 paper proposed the noise conditional score network (NCSN), or score-matching with Langevin dynamics (SMLD).[7] The paper was accompanied by a software package written in PyTorch, released on GitHub.[8]
A 2020 paper[9] proposed the Denoising Diffusion Probabilistic Model (DDPM), which improves upon the previous method by variational inference. The paper was accompanied by a software package written in TensorFlow, released on GitHub.[10] It was reimplemented in PyTorch by lucidrains.[11][12]
On December 20, 2021, the LDM paper was published on arXiv,[13] and both the Stable Diffusion[14] and LDM[15] repositories were published on GitHub. However, the two repositories initially contained roughly the same material. Substantial information concerning Stable Diffusion v1 was only added to GitHub on August 10, 2022.[16]
All of Stable Diffusion (SD) versions 1.1 to XL were particular instantiations of the LDM architecture.
SD 1.1 to 1.4 were released by CompVis in August 2022. There is no "version 1.0". SD 1.1 was an LDM trained on the laion2B-en dataset. SD 1.1 was finetuned to 1.2 on more aesthetic images. SD 1.2 was finetuned to 1.3, 1.4 and 1.5, with 10% of the text-conditioning dropped, to improve classifier-free guidance.[17][18] SD 1.5 was released by RunwayML in October 2022.[18]
While the LDM can work for generating arbitrary data conditional on arbitrary data, for concreteness, we describe its operation in conditional text-to-image generation.
LDM consists of a variational autoencoder (VAE), a modified U-Net, and a text encoder.
The VAE encoder compresses the image from pixel space to a smaller dimensional latent space, capturing a more fundamental semantic meaning of the image. Gaussian noise is iteratively applied to the compressed latent representation during forward diffusion. The U-Net block, composed of a ResNet backbone, denoises the output from forward diffusion backwards to obtain a latent representation. Finally, the VAE decoder generates the final image by converting the representation back into pixel space.[4]
The denoising step can be conditioned on a string of text, an image, or another modality. The encoded conditioning data is exposed to the denoising U-Net via a cross-attention mechanism.[4] For conditioning on text, a fixed, pretrained CLIP ViT-L/14 text encoder is used to transform text prompts into an embedding space.[3]
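As an illustrative sketch of this text-conditioning path, a prompt can be converted into a sequence of token embeddings using the Hugging Face transformers CLIP classes; the original repository wraps the same model in its own code, so this is an approximation rather than its actual interface:

# Sketch: encoding a text prompt with the CLIP ViT-L/14 text encoder.
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompt = "a photograph of an astronaut riding a horse"
tokens = tokenizer(prompt, padding="max_length", max_length=77,
                   truncation=True, return_tensors="pt")
# Shape (1, 77, 768): one embedding per token position; these serve as the
# keys and values of the U-Net's cross-attention layers.
cond = text_encoder(tokens.input_ids).last_hidden_state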
To compress the image data, a variational autoencoder (VAE) is first trained on a dataset of images. The encoder part of the VAE takes an image as input and outputs a lower-dimensional latent representation of the image. This latent representation is then used as input to the U-Net. Once the model is trained, the encoder is used to encode images into latent representations, and the decoder is used to decode latent representations back into images.
Let the encoder and the decoder of the VAE be $E$ and $D$ respectively.

To encode an RGB image $x$, its three channels are divided by the maximum value, resulting in a tensor of shape $(3, 512, 512)$ with all entries within the range $[0, 1]$. The encoded vector is $z = 0.18215 \cdot E(x)$, with shape $(4, 64, 64)$, where $0.18215$ is a hyperparameter which the original authors picked so that the encoded vector has roughly unit variance. Conversely, given a latent tensor $z$, the decoded image is $D(z / 0.18215)$, then clipped to the range $[0, 1]$.[19][20]
In the implemented version,[3]: ldm/models/autoencoder.py the encoder is a convolutional neural network (CNN) with a single self-attention mechanism near the end. It takes a tensor of shape $(3, 512, 512)$ and outputs a tensor of shape $(8, 64, 64)$, being the concatenation of the predicted mean and variance of the latent vector. The variance is used during training, but after training, usually only the mean is taken and the variance is discarded.
The decoder is a CNN, also with a single self-attention mechanism near the end. It takes a tensor of shape $(4, 64, 64)$ and outputs a tensor of shape $(3, 512, 512)$.
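A minimal sketch of this encode/decode convention, assuming placeholder callables vae_encoder and vae_decoder for the two trained halves (these names are illustrative, not the repository's module names):

import torch

SCALE = 0.18215  # hyperparameter: scales latents to roughly unit variance

def encode_image(vae_encoder, image: torch.Tensor) -> torch.Tensor:
    # image: (3, 512, 512), channel values already divided into [0, 1]
    # vae_encoder returns an (8, 64, 64) tensor: predicted mean and variance.
    mean, _logvar = vae_encoder(image).chunk(2, dim=0)
    return SCALE * mean                    # (4, 64, 64); variance discarded at inference

def decode_latent(vae_decoder, latent: torch.Tensor) -> torch.Tensor:
    # latent: (4, 64, 64)
    image = vae_decoder(latent / SCALE)    # undo the scaling before decoding
    return image.clamp(0.0, 1.0)           # clip back into the [0, 1] range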
The U-Net backbone takes the following kinds of inputs:
- the latent image array being denoised;
- a time-embedding vector, indicating how far along the noise schedule the current step is;
- an optional conditioning embedding, such as the CLIP text embedding, attended to via cross-attention.
Each run through the U-Net backbone produces a predicted noise vector. This vector is scaled and subtracted from the latent image array, yielding a slightly less noisy latent image. The denoising is repeated according to a denoising schedule ("noise schedule"), and the output of the last step is processed by the VAE decoder into the finished image.
Similar to the standard U-Net, the U-Net backbone used in SD 1.5 is essentially composed of down-scaling layers followed by up-scaling layers. However, the backbone has additional modules that allow it to handle the time and conditioning embeddings. As an illustration, we describe a single down-scaling layer in the backbone, which consists of:
- a ResBlock, which processes the latent feature map and injects the time embedding (in the up-scaling half, it also receives the residual connection from the matching down-scaling layer);
- a SpatialTransformer, which is essentially a standard pre-LN Transformer decoder without causal masking, and which cross-attends to the conditioning embedding.
In pseudocode,
def ResBlock(x, time, residual_channels=None):
    # Residual block: injects the time embedding into the feature map.
    x_in = x
    time_embedding = feedforward_network(time)
    if residual_channels is not None:
        # In the up-scaling half, the skip connection from the matching
        # down-scaling layer is concatenated along the channel dimension.
        x = concatenate(x, residual_channels)
    x = conv_layer_1(activate(normalize_1(x))) + time_embedding
    x = conv_layer_2(dropout(activate(normalize_2(x))))
    return x_in + x  # in the real model, a 1x1 convolution adjusts x_in if channel counts differ

def SpatialTransformer(x, cond):
    # Pre-LN Transformer block: the latent features cross-attend to the
    # conditioning sequence (e.g. the CLIP text embedding).
    x_in = x
    x = normalize(x)
    x = proj_in(x)
    x = cross_attention(x, cond)
    x = proj_out(x)
    return x_in + x

def unet(x, time, cond):
    # Down-scaling half: store each intermediate result as a residual connection.
    residual_channels = []
    for resblock, spatialtransformer in downscaling_layers:
        x = resblock(x, time)
        residual_channels.append(x)
        x = spatialtransformer(x, cond)

    x = middle_layer.resblock_1(x, time)
    x = middle_layer.spatialtransformer(x, cond)
    x = middle_layer.resblock_2(x, time)

    # Up-scaling half: pop the matching residual and hand it to each block.
    for resblock, spatialtransformer in upscaling_layers:
        residual = residual_channels.pop()
        x = resblock(x, time, residual)
        x = spatialtransformer(x, cond)
    return x
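The cross_attention call above is ordinary scaled dot-product attention in which the queries come from the flattened latent features and the keys and values come from the conditioning embedding. A minimal single-head sketch, with the projection arguments passed in as illustrative placeholders rather than the repository's actual modules:

import torch.nn.functional as F

def cross_attention(x, cond, to_q, to_k, to_v):
    # x:    (n_latent_positions, d_model)  -- flattened spatial features
    # cond: (n_tokens, d_cond)             -- e.g. the CLIP text embedding sequence
    # to_q, to_k, to_v: learned linear projections (passed in for self-containment)
    q = to_q(x)                            # queries come from the image latents
    k = to_k(cond)                         # keys come from the conditioning sequence
    v = to_v(cond)                         # values come from the conditioning sequence
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ v   # each latent position mixes conditioning tokens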
The LDM is trained by using a Markov chain to gradually add noise to the training images. The model is then trained to reverse this process: starting from a noisy image, it gradually removes the noise until it recovers the original image. More specifically, at each training step a random timestep is sampled, the corresponding amount of noise is added to the latent image, and the U-Net predicts the noise that was added.
The model is trained to minimize the difference between the predicted noise and the actual noise added at each step. This is typically done using a mean squared error (MSE) loss function.
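A minimal sketch of one such training step, with the U-Net, the VAE latents, and the precomputed noise-schedule tensor alphas_cumprod all taken as placeholders rather than the repository's actual API:

import torch
import torch.nn.functional as F

def training_step(unet, latents, cond, alphas_cumprod, optimizer):
    # latents: clean VAE latents, shape (batch, 4, 64, 64)
    # cond:    conditioning embeddings, e.g. CLIP text embeddings
    # alphas_cumprod: cumulative noise schedule, shape (num_timesteps,)
    t = torch.randint(0, alphas_cumprod.shape[0], (latents.shape[0],))
    noise = torch.randn_like(latents)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    noisy = a.sqrt() * latents + (1 - a).sqrt() * noise  # forward diffusion in one step
    pred = unet(noisy, t, cond)                           # U-Net predicts the added noise
    loss = F.mse_loss(pred, noise)                        # MSE between predicted and true noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()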
Once the model is trained, it can be used to generate new images by simply running the reverse diffusion process starting from a random noise sample. The model gradually removes the noise from the sample, guided by the learned noise distribution, until it generates a final image.
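A correspondingly simplified, DDPM-style sampling loop might look as follows; real LDM implementations use more elaborate samplers (e.g. DDIM or PLMS) and fewer steps, so this is only a sketch with placeholder schedule tensors:

import torch

@torch.no_grad()
def sample_latent(unet, cond, alphas, alphas_cumprod, num_timesteps):
    # Start from pure Gaussian noise in latent space and denoise step by step.
    x = torch.randn(1, 4, 64, 64)
    for t in reversed(range(num_timesteps)):
        a_t, ac_t = alphas[t], alphas_cumprod[t]
        eps = unet(x, torch.tensor([t]), cond)              # predicted noise at step t
        # Remove the predicted noise contribution (DDPM posterior mean).
        x = (x - (1 - a_t) / (1 - ac_t).sqrt() * eps) / a_t.sqrt()
        if t > 0:
            x = x + (1 - a_t).sqrt() * torch.randn_like(x)  # reintroduce some noise
    return x  # pass through the VAE decoder to obtain the final image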
See the diffusion model page for details.