In Continual Learning (CL), there has been limited research on Multi-Label Learning (MLL). Moreover, MLL datasets are often class-imbalanced, which makes them especially challenging in the CL setting. To optimize Macro-AUC in Multi-Label Continual Learning (MLCL), a new memory replay-based method called RLDAM is proposed. Experimental results demonstrate the effectiveness of the proposed method over baseline approaches.
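
The abstract does not spell out how RLDAM works, so the following is only a minimal, generic sketch of the two ingredients it names: a memory replay buffer and an imbalance-aware per-label loss as a rough proxy for Macro-AUC optimization. All names here (`ReplayBuffer`, `imbalance_weighted_bce`) are illustrative assumptions and do not describe the actual RLDAM algorithm.

```python
import numpy as np

class ReplayBuffer:
    """Reservoir-style memory buffer for multi-label samples.

    Stores (feature, label-vector) pairs from past tasks so they can be
    replayed alongside new-task data. Generic illustration only; not the
    RLDAM method from the paper.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.x, self.y = [], []
        self.seen = 0
        self.rng = np.random.default_rng(seed)

    def add(self, x, y):
        # Reservoir sampling keeps an approximately uniform sample of the stream.
        self.seen += 1
        if len(self.x) < self.capacity:
            self.x.append(x)
            self.y.append(y)
        else:
            j = self.rng.integers(0, self.seen)
            if j < self.capacity:
                self.x[j], self.y[j] = x, y

    def sample(self, batch_size):
        # Draw a replay mini-batch from stored examples.
        k = min(batch_size, len(self.x))
        idx = self.rng.choice(len(self.x), size=k, replace=False)
        return (np.stack([self.x[i] for i in idx]),
                np.stack([self.y[i] for i in idx]))


def imbalance_weighted_bce(scores, labels, eps=1e-7):
    """Per-label binary cross-entropy reweighted by label frequency.

    Upweighting the rarer side of each label is one common way to make a
    decomposable surrogate behave better under class imbalance (a concern
    related to Macro-AUC); the exact surrogate used by RLDAM is not given
    in the abstract.
    """
    scores = np.clip(scores, eps, 1 - eps)
    pos_rate = labels.mean(axis=0).clip(eps, 1 - eps)  # per-label positive frequency
    w_pos, w_neg = 1.0 / pos_rate, 1.0 / (1.0 - pos_rate)
    loss = -(w_pos * labels * np.log(scores)
             + w_neg * (1 - labels) * np.log(1 - scores))
    return loss.mean()
```

In a training loop, one would typically add a portion of each incoming task's examples to the buffer and mix replayed samples into every mini-batch, applying the imbalance-aware loss to both current and replayed data; how RLDAM selects, weights, or updates memory samples is not described in this abstract.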