Image-based saliency prediction has reached high performance levels on existing benchmarks.
However, predicting fixations consistently across multiple saliency datasets remains challenging due to dataset bias.
Models trained on one dataset typically suffer a performance drop of around 40% when evaluated on another dataset.
A novel architecture that combines shared components with dataset-specific parameters has been proposed to address this generalization gap, leading to a new state-of-the-art on the MIT/Tuebingen Saliency Benchmark datasets.
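To make the idea of dataset-specific parameters concrete, the following is a minimal illustrative sketch, not the paper's actual architecture: a shared backbone produces features, and each dataset gets its own small set of parameters (here a scale and bias) applied before normalizing to a fixation probability map. All names and the specific parameterization are assumptions for illustration.

```python
import numpy as np

class MultiDatasetSaliency:
    """Hypothetical sketch: shared backbone weights plus per-dataset
    parameters, mimicking the general idea of dataset-specific heads."""

    def __init__(self, feature_dim, datasets):
        rng = np.random.default_rng(0)
        # Shared backbone weights, conceptually trained on all datasets jointly.
        self.w_shared = rng.normal(size=(feature_dim,))
        # Dataset-specific scale and bias, conceptually trained per dataset.
        self.params = {d: {"scale": 1.0, "bias": 0.0} for d in datasets}

    def predict(self, features, dataset):
        # features: (H, W, feature_dim) array of backbone activations.
        p = self.params[dataset]
        logits = features @ self.w_shared * p["scale"] + p["bias"]
        # Normalize to a fixation probability map (softmax over all pixels).
        e = np.exp(logits - logits.max())
        return e / e.sum()

# Usage: the same input yields a dataset-conditioned prediction.
model = MultiDatasetSaliency(feature_dim=8, datasets=["MIT1003", "CAT2000"])
saliency = model.predict(np.zeros((4, 4, 8)), "MIT1003")
```

The design choice illustrated here is that only the small per-dataset parameter set varies across benchmarks, so most capacity is shared and the model can still be trained jointly on all datasets.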