news

Mar 25, 2025 :star: I will join Apple MLR as a research intern, working on fundamental machine learning problems.
Feb 28, 2025 :zap: DICE: Discrete Inversion Enabling Controllable Editing for Multinomial Diffusion and Masked Generative Models got accepted at CVPR 2025. This paper proposes an editing technique for discrete diffusion and masked generative models.
Jan 22, 2025 :zap: Improved Latent Consistency Model got accepted at ICLR 2025. This paper proposes a series of novel techniques, such as a Cauchy loss, OT coupling, an adaptive robust scale scheduler, and a diffusion loss at early timesteps, to efficiently train latent consistency models from scratch. Our techniques bridge the performance gap between LDM and LCM training. (This is the first work to discover the instability of consistency models in latent space caused by impulsive outliers.)
Dec 10, 2024 :zap: SCFlow got accepted at AAAI 2025. This is the first work to distill a flow matching model into one- and few-step generation. With SCFlow, we achieve consistent one- and few-step generation: starting from the same noise, the final generated image is identical no matter how many NFEs are used for sampling.
Sep 23, 2024 :zap: Yummy DimSUM got accepted at NeurIPS 2024. DimSUM proposes a novel hybrid Transformer-Mamba architecture that enables faster-converging training of diffusion/flow matching models and achieves SoTA image generation.
Jul 21, 2024 :zap: RDUOT got accepted at ECCV 2024. This paper combines the UOT generative framework with diffusion noising to train a fast-converging and robust generative framework.
Jul 13, 2023 :zap: Anti-DreamBooth got accepted at ICCV 2023. Anti-DreamBooth adds small imperceptible noise to your images to break malicious exploitation of DreamBooth on them.
Feb 26, 2023 :zap: My first paper, WaveDiff, got accepted at CVPR 2023. WaveDiff proposes a frequency-aware UNet architecture that enables fast-converging training for the DiffusionGAN framework.