Structural Causal Models (SCMs) enable reasoning about interventions and support out-of-distribution generalization, two key properties for scientific discovery.
Learning SCMs from observed data is challenging, typically requiring a separate model to be trained for each dataset.
This work introduces amortized inference of SCMs by training a single model on multiple datasets from different SCMs.
First, a transformer-based architecture learns embeddings of entire datasets; the Fixed-Point Approach (FiP) is then extended to infer SCMs conditioned on these dataset embeddings.
The proposed method can generate observational and interventional data from novel SCMs at inference time, without any parameter updates.
Empirically, the amortized procedure is competitive with existing baselines on both in- and out-of-distribution problems, and outperforms them in the limited-data regime.
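
To make the amortized setup concrete, the sketch below shows one plausible reading of the pipeline: a transformer encodes a dataset into an embedding, and a fixed-point-style decoder maps noise to samples conditioned on that embedding. This is a minimal illustration under assumed design choices, not the authors' implementation; the class names (`DatasetEncoder`, `CondFixedPointSCM`), hyperparameters, and the mean-pooled embedding are all hypothetical.

```python
# Hypothetical sketch of amortized SCM inference; not the paper's actual code.
import torch
import torch.nn as nn

class DatasetEncoder(nn.Module):
    """Transformer over the samples of one dataset -> a single dataset embedding."""
    def __init__(self, num_vars, embed_dim=64, depth=2, heads=4):
        super().__init__()
        self.proj = nn.Linear(num_vars, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, samples):                      # samples: (n_samples, num_vars)
        tokens = self.proj(samples).unsqueeze(0)     # (1, n_samples, embed_dim)
        encoded = self.encoder(tokens)
        return encoded.mean(dim=1)                   # (1, embed_dim) dataset embedding

class CondFixedPointSCM(nn.Module):
    """Fixed-point-style decoder: iterates a shared update map x <- f(x, noise; ctx),
    conditioned on the dataset embedding, to turn noise into observations."""
    def __init__(self, num_vars, embed_dim=64, hidden=128):
        super().__init__()
        self.n_iters = num_vars  # d iterations suffice for a DAG over d variables
        self.update = nn.Sequential(
            nn.Linear(2 * num_vars + embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_vars),
        )

    def forward(self, noise, dataset_emb):           # noise: (batch, num_vars)
        ctx = dataset_emb.expand(noise.size(0), -1)  # broadcast embedding to batch
        x = torch.zeros_like(noise)
        for _ in range(self.n_iters):                # fixed-point iteration
            x = self.update(torch.cat([x, noise, ctx], dim=-1))
        return x                                     # generated samples

# Usage: embed a new dataset once, then sample from its inferred SCM
# with no gradient updates, mirroring the zero-shot claim above.
num_vars = 5
encoder, scm = DatasetEncoder(num_vars), CondFixedPointSCM(num_vars)
with torch.no_grad():
    emb = encoder(torch.randn(200, num_vars))        # 200 observed samples
    samples = scm(torch.randn(32, num_vars), emb)    # 32 new draws
print(samples.shape)                                 # torch.Size([32, 5])
```

Interventions would fit the same template: clamping a coordinate of `x` to a fixed value inside the iteration loop (an assumption about the mechanism, not a detail from the abstract) yields interventional rather than observational samples.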