Machine learning systems typically assume that training and test data are drawn from the same distribution, an assumption that rarely holds in real-world deployments where data conditions shift.
Unsupervised domain adaptation with minimal access to data from the new domain is therefore crucial for building models that are robust to distribution shift.
This study explores optimal transport between Gaussian Mixture Models (GMMs) as an efficient way to model and correct for distribution shift, showing promising results on a range of benchmarks.
The proposed method is more efficient and scalable than previous shallow domain adaptation methods, and it performs well across varying sample sizes and dimensionalities.
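For illustration, the sketch below shows one standard way to compute an optimal transport plan between two GMMs: restricting couplings to mixtures of the component Gaussians reduces the problem to discrete OT over the component weights, with the closed-form Wasserstein-2 distance between Gaussian pairs as the ground cost (the MW2 formulation of Delon and Desolneux). This is an assumption-laden sketch, not necessarily the algorithm proposed here; the helper names `gaussian_w2_sq` and `gmm_ot_plan` are illustrative, and the POT library is used only for the discrete OT step.

```python
# Illustrative sketch: discrete OT between GMM components (MW2-style).
# Assumes numpy, scipy, and POT (pip install pot); names are hypothetical.
import numpy as np
import ot  # POT: Python Optimal Transport
from scipy.linalg import sqrtm


def gaussian_w2_sq(m1, S1, m2, S2):
    """Closed-form squared Wasserstein-2 distance between two Gaussians:
    ||m1 - m2||^2 + Tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})."""
    S2_half = sqrtm(S2)
    cross = np.real(sqrtm(S2_half @ S1 @ S2_half))  # drop tiny imaginary parts
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross))


def gmm_ot_plan(w_src, means_src, covs_src, w_tgt, means_tgt, covs_tgt):
    """Optimal coupling between the components of two GMMs.

    With the pairwise Gaussian W2^2 as ground cost, the mixture-level
    problem is a small linear program over the component weights."""
    K, L = len(w_src), len(w_tgt)
    C = np.zeros((K, L))
    for k in range(K):
        for l in range(L):
            C[k, l] = gaussian_w2_sq(means_src[k], covs_src[k],
                                     means_tgt[l], covs_tgt[l])
    return ot.emd(w_src, w_tgt, C), C  # exact discrete OT on the weights


# Toy example: two 2-component GMMs in 2-D.
means_src = [np.zeros(2), np.array([3.0, 0.0])]
means_tgt = [np.array([0.5, 0.5]), np.array([3.5, 0.5])]
covs = [np.eye(2), 0.5 * np.eye(2)]
plan, cost = gmm_ot_plan(np.array([0.5, 0.5]), means_src, covs,
                         np.array([0.4, 0.6]), means_tgt, covs)
print("component coupling:\n", plan)
print("mixture-level W2^2:", np.sum(plan * cost))
```

Because the linear program is over K x L component weights rather than the raw samples, the cost of this step is independent of sample size and depends on the data dimension only through the Gaussian W2 evaluations, which is consistent with the scalability claim above.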