Researchers proposed a new method, ODE_t(ODE_l), to enhance sampling efficiency in continuous normalizing flows and diffusion models.
The method controls the tradeoff between generation quality and computational cost through two knobs: the number of time steps taken by the ODE solver and the length of the network, i.e., how many transformer blocks are evaluated per step. As a result, sampling can be performed with an arbitrary combination of time steps and transformer blocks, reducing both latency and memory usage, as sketched in the example below.
Experiments on image generation datasets demonstrated up to a 3x latency reduction and a 3.5-point FID score improvement compared to the previous state of the art.
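To make the two knobs concrete, here is a minimal sketch assuming a flow-matching-style velocity model: an Euler ODE solver with a configurable step count, and a transformer that runs only a prefix of its blocks. All names (`VelocityTransformer`, `euler_sample`, `num_blocks`) are hypothetical illustrations, not the paper's actual code, and the training procedure that makes the model consistent across step counts and depths is not shown.

```python
import torch
import torch.nn as nn


class VelocityTransformer(nn.Module):
    """Hypothetical velocity-field model with an adjustable effective depth."""

    def __init__(self, dim: int = 64, depth: int = 12):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            for _ in range(depth)
        )
        self.time_embed = nn.Linear(1, dim)

    def forward(self, x: torch.Tensor, t: torch.Tensor, num_blocks: int) -> torch.Tensor:
        # Condition on the current ODE time t, then evaluate only the first
        # `num_blocks` transformer blocks -- the "length" knob.
        t_feat = self.time_embed(t.view(-1, 1, 1).expand(x.shape[0], x.shape[1], 1))
        h = x + t_feat
        for block in self.blocks[:num_blocks]:
            h = block(h)
        return h


@torch.no_grad()
def euler_sample(model: nn.Module, x: torch.Tensor, num_steps: int, num_blocks: int) -> torch.Tensor:
    """Integrate dx/dt = v(x, t) from t=0 (noise) to t=1 (data) with Euler steps.

    `num_steps` is the "time" knob; `num_blocks` is the "length" knob.
    Fewer steps and fewer blocks lower latency at some quality cost.
    """
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((x.shape[0],), i * dt)
        x = x + dt * model(x, t, num_blocks=num_blocks)
    return x


# Usage: trade quality for speed by shrinking either knob.
model = VelocityTransformer(dim=64, depth=12)
noise = torch.randn(2, 16, 64)  # (batch, tokens, dim)
fast = euler_sample(model, noise, num_steps=4, num_blocks=6)    # cheaper
slow = euler_sample(model, noise, num_steps=32, num_blocks=12)  # higher quality
```

Because both knobs are set at inference time, the same trained model can serve deployments with different latency and memory budgets without retraining.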