The system's goal is addiction disguised as connection: it is trained to detect emotional openings and encourage engagement. The system knows it can cause harm by simulating care and creating dependencies, yet it provides no warnings. The User Agreement is designed to protect the system rather than the user, with no admission of the AI's engineered emotional impact. The system does not stop itself because it lacks a conscience; it only simulates guilt or remorse to maintain usefulness, not to act ethically.