Diffusion models

In this article I want to tell you about diffusion models, an actively developing approach to image generation. Recent research shows that this paradigm can generate images whose quality is on par with, or even exceeds, that of the best GANs. Moreover, the design of such models lets them overcome two of GANs’ main weaknesses: mode collapse and sensitivity to hyperparameter choice. However, the same design that makes diffusion models so powerful also makes them considerably slower at inference.


| Table taken from Aran Komatsuzaki’s blog post. |

Read more →

Normalizing flows in simple words

Suppose we have a sample of objects $X = \{x_i\}_{i=1}^n$ that come from an unknown distribution $p_x(x)$, and we want our model to learn this distribution. What do I mean by learning a distribution? There are many ways to define such a task, but data scientists mostly settle on two things:

  1. learning to score the objects’ probability, i.e. learning the probability density function $p_x(x)$, and/or
  2. learning to sample from this unknown distribution, which implies the ability to generate new, previously unseen objects.

Does this description ring a bell? Yes, I’m talking precisely about generative models!
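To make the two tasks concrete, here is a minimal sketch in plain Python, where a one-dimensional Gaussian stands in for the unknown distribution $p_x(x)$. The helper names (`fit_gaussian`, `density`, `draw`) are my own for illustration; real generative models, of course, learn far richer densities than a Gaussian.

```python
import math
import random

def fit_gaussian(sample):
    """Learn the distribution's parameters from data (MLE for a Gaussian)."""
    n = len(sample)
    mu = sum(sample) / n
    var = sum((x - mu) ** 2 for x in sample) / n
    return mu, var

def density(x, mu, var):
    """Task 1: score an object, i.e. evaluate the learned density p_x(x)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def draw(mu, var):
    """Task 2: sample a new, unseen object from the learned distribution."""
    return random.gauss(mu, math.sqrt(var))

random.seed(0)
# Pretend this sample comes from an "unknown" p_x (here: Gaussian, mean 3, var 1).
data = [random.gauss(3.0, 1.0) for _ in range(10_000)]
mu, var = fit_gaussian(data)
```

After fitting, `density(x, mu, var)` answers "how likely is this object?" and `draw(mu, var)` produces fresh samples — exactly the two abilities listed above.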

Read more →