DIFFT introduces Reward-Guided Hierarchical Diffusion for Feature Transformation (FT) to enhance dataset expressiveness for downstream models.
It uses a Variational Auto-Encoder (VAE) to learn a compact and expressive latent space for feature sets.
A Latent Diffusion Model (LDM) then generates high-quality feature embeddings in this space, with a performance evaluator steering generation toward task-specific optima.
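To make the reward-guided generation concrete, the following is a minimal toy sketch of evaluator-guided latent denoising in the classifier-guidance style: at each step an unguided denoising update is nudged by the gradient of an evaluator reward. The denoiser, evaluator, and all parameters here are hypothetical stand-ins, not DIFFT's actual components.

```python
import numpy as np

# Toy sketch of reward-guided latent denoising (classifier-guidance style).
# The denoiser and evaluator below are hypothetical stand-ins, not DIFFT's.

rng = np.random.default_rng(0)
TARGET = np.array([1.0, -1.0])  # latent the toy evaluator prefers

def denoise_step(z):
    """Toy 'denoiser': shrink the latent slightly toward the origin."""
    return 0.95 * z

def reward_grad(z):
    """Gradient of a toy evaluator reward  r(z) = -||z - TARGET||^2."""
    return -2.0 * (z - TARGET)

def guided_sample(steps=200, guidance=0.02):
    z = rng.standard_normal(2)             # start from Gaussian noise
    for _ in range(steps):
        z = denoise_step(z)                # unguided denoising update
        z = z + guidance * reward_grad(z)  # nudge toward higher reward
    return z

z = guided_sample()
```

Each iteration contracts toward a fixed point that balances the denoiser's prior against the evaluator's reward, illustrating how guidance biases sampling without retraining the generative model.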
Extensive experiments on 14 benchmark datasets demonstrate that DIFFT outperforms state-of-the-art baselines in predictive accuracy, robustness, and efficiency.