Dynamics-Guided Diffusion Model (DGDM) is a data-driven framework for generating task-specific manipulator designs without any task-specific training.
DGDM generates sensorless manipulator designs that can blindly manipulate objects toward desired motions and poses using an open-loop parallel motion.
The framework represents manipulation tasks as interaction profiles and models the design space with a geometric diffusion model.
DGDM achieves a higher average success rate than both optimization-based and unguided-diffusion baselines.
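The core idea of guiding a diffusion sampler with a dynamics objective can be illustrated with a toy sketch. The snippet below is a minimal illustration, not DGDM's actual implementation: `dynamics_objective` is a hypothetical stand-in for a differentiable score of how well a design's predicted interaction profile matches the target motion, and the "denoiser" is a simple contraction rather than a learned geometric diffusion model. At each denoising step, the sample is nudged along the objective's gradient, which is the general shape of guidance-steered sampling.

```python
import numpy as np

TARGET = 0.5  # hypothetical "ideal" design parameter value

def dynamics_objective(design):
    # Stand-in for a differentiable dynamics score: higher is better,
    # peaking when the design matches the target interaction profile.
    return -np.sum((design - TARGET) ** 2)

def objective_grad(design):
    # Analytic gradient of the toy objective above.
    return -2.0 * (design - TARGET)

def guided_denoising(steps=50, dim=8, guidance_scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)  # start from pure noise
    for t in range(steps, 0, -1):
        noise_level = t / steps
        # Placeholder denoiser: contract the sample toward zero
        # (in DGDM this would be the learned geometric diffusion model).
        x = x * (1.0 - 1.0 / steps)
        # Dynamics guidance: step up the objective's gradient,
        # weighted by the current noise level.
        x = x + guidance_scale * noise_level * objective_grad(x)
        # Small stochastic perturbation, skipped at the final step.
        if t > 1:
            x = x + 0.01 * noise_level * rng.standard_normal(dim)
    return x

# Guided sampling should end closer to the target than unguided sampling.
guided = guided_denoising(guidance_scale=0.1)
unguided = guided_denoising(guidance_scale=0.0)
print(dynamics_objective(guided), dynamics_objective(unguided))
```

With the same random seed, the guided run scores strictly better on the toy objective than the unguided run, mirroring the paper's comparison against an unguided diffusion baseline at a much smaller scale.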