T-CIL is a novel temperature scaling approach for class-incremental learning that requires no validation set for old tasks. It leverages adversarially perturbed exemplars from memory to improve the calibration of model confidence. The key idea of T-CIL is to perturb exemplars of old tasks more strongly than those of the new task, with the perturbation magnitude determined by feature distance. T-CIL outperforms various baselines in terms of calibration and can be integrated with existing class-incremental learning techniques.
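Temperature scaling itself is simple: a single scalar T divides the logits before the softmax, and T is chosen to minimize negative log-likelihood on a calibration set (in T-CIL's setting, that set would be built from the perturbed memory exemplars). The sketch below shows only the generic temperature-tuning step in NumPy, with a grid search standing in for the actual optimizer; the adversarial perturbation of exemplars is not shown, and `tune_temperature` is a hypothetical helper name.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax: divide logits by T before normalizing."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the true labels at temperature T."""
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def tune_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Grid-search the temperature minimizing NLL on a calibration set.

    In T-CIL's setting, `logits`/`labels` would come from the (adversarially
    perturbed) exemplars held in memory, since no old-task validation set exists.
    """
    losses = [nll(logits, labels, T) for T in grid]
    return float(grid[int(np.argmin(losses))])
```

A temperature above 1 softens overconfident predictions; the tuned T is then applied at test time without changing the model's predicted classes, since dividing logits by a positive scalar preserves their ranking.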