Federated In-Context Learning (Fed-ICL) is proposed as a framework to improve answer quality in question-answering tasks without transmitting model parameters.
Fed-ICL enhances In-Context Learning (ICL) through iterative refinement via multi-round interactions between clients and a central server.
It addresses the challenge of scarce high-quality demonstrations by leveraging examples stored locally on client devices.
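To make the client-server interaction concrete, here is a minimal Python sketch of one plausible instantiation of such a protocol. Everything here is an illustrative assumption rather than the paper's exact algorithm: the names `Client`, `Server`, `query_llm`, and `fed_icl` are hypothetical, and majority voting is just one possible aggregation rule. The point is only to show that text (candidate answers and shared context) moves between parties across rounds, never model parameters.

```python
import random
from collections import Counter

def query_llm(prompt: str) -> str:
    """Placeholder for a client's local LLM call; returns a candidate answer."""
    raise NotImplementedError("plug in a local model's generate() here")

class Client:
    def __init__(self, local_examples: list[tuple[str, str]]):
        # (question, answer) demonstrations kept on-device; never uploaded raw.
        self.local_examples = local_examples

    def answer(self, question: str, shared_context: list[str], k: int = 4) -> str:
        # Build an ICL prompt from local demonstrations plus any
        # server-broadcast context from earlier rounds.
        demos = random.sample(self.local_examples, min(k, len(self.local_examples)))
        prompt = "\n".join(f"Q: {q}\nA: {a}" for q, a in demos)
        if shared_context:
            prompt += "\n" + "\n".join(shared_context)
        prompt += f"\nQ: {question}\nA:"
        return query_llm(prompt)

class Server:
    def aggregate(self, candidates: list[str]) -> str:
        # Assumed aggregation rule: majority vote over client answers.
        return Counter(candidates).most_common(1)[0][0]

def fed_icl(question: str, clients: list[Client], rounds: int = 3) -> str:
    # Each round: clients answer with local in-context examples, the server
    # aggregates the candidates, and the consensus is broadcast back as extra
    # context for the next round. Only short strings cross the network.
    shared_context: list[str] = []
    answer = ""
    for _ in range(rounds):
        candidates = [c.answer(question, shared_context) for c in clients]
        answer = Server().aggregate(candidates)
        shared_context = [f"Q: {question}\nA (current consensus): {answer}"]
    return answer
```

In practice, `query_llm` would wrap whatever model runs on each device; the communication cost per round is just the candidate answers and the broadcast consensus string, which is why such a scheme can stay cheap relative to parameter exchange.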
Extensive experiments on standard QA benchmarks show that Fed-ICL achieves strong performance while incurring low communication costs.