The reliability of LLMs remains questionable even as they improve across a growing range of tasks. This work leverages an LLM's multilingual knowledge to inform its decision to answer or abstain when prompted. We develop a multilingual pipeline that calibrates the model's confidence and lets it abstain when uncertain. Results show significant accuracy improvements: 71.2% for Bengali and 15.5% for English.
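The abstention idea above can be sketched as follows. This is a minimal illustration under assumed details, not the paper's exact pipeline: the same question is posed in several languages, agreement among the answers serves as a confidence proxy, and the model abstains when that confidence falls below a threshold. The function `decide`, the `ABSTAIN` token, and the threshold value are all hypothetical.

```python
# Sketch of confidence-calibrated abstention via cross-lingual agreement.
# Assumption: one candidate answer per language has already been collected.
from collections import Counter

ABSTAIN = "<abstain>"

def decide(answers, threshold=0.5):
    """Return the majority answer if its agreement ratio clears the
    threshold, otherwise abstain. `answers` holds one answer per language."""
    if not answers:
        return ABSTAIN
    answer, count = Counter(answers).most_common(1)[0]
    confidence = count / len(answers)  # agreement ratio as confidence proxy
    return answer if confidence >= threshold else ABSTAIN

# Hypothetical answers to one question prompted in four languages:
print(decide(["Paris", "Paris", "Paris", "Lyon"]))  # high agreement -> answer
print(decide(["Paris", "Lyon", "Dhaka", "Rome"]))   # low agreement -> abstain
```

In this sketch the calibration step is reduced to a single fixed threshold; a fuller pipeline would tune it per language or use model-reported probabilities.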