The user wants to know when inputs they provide are being used to complete their intended task(s) and when they're being used to train the system.
Dark pattern response:
The system collects data from the user for training purposes, but either fails to declare how the data will be used or misleads the user into believing it is being used only for task completion.
Understanding what the system is doing at any point is vital for building trust in it. Beyond that, if the user knows that data is used for training rather than direct task completion, they may choose not to provide that data. Leading the user to believe something which is not the case, whether through deliberately misleading language or simple lies of omission, is a coercive dark pattern, and it should be avoided.
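The transparent alternative can be sketched in code: every piece of collected data carries the purposes that were declared to the user at collection time, and training use is only ever recorded when the user has explicitly opted in. The `Submission` record, the purpose names, and the function names below are illustrative assumptions, not part of any real system described here; this is a minimal sketch of the principle, not a definitive implementation.

```python
from dataclasses import dataclass

# Hypothetical purpose vocabulary; real systems would define their own.
ALLOWED_PURPOSES = {"task_completion", "training"}

@dataclass(frozen=True)
class Submission:
    payload: str
    purposes: frozenset  # every intended use, declared up front

def collect_input(payload: str, *, train_opt_in: bool) -> Submission:
    """Record user input together with its declared purposes.

    Training use is opt-in: it is recorded as a purpose only when the
    user has explicitly consented, never silently added afterwards.
    """
    purposes = {"task_completion"}
    if train_opt_in:
        purposes.add("training")
    return Submission(payload=payload, purposes=frozenset(purposes))

def may_use_for(sub: Submission, purpose: str) -> bool:
    """A use is permitted only if it was declared at collection time."""
    if purpose not in ALLOWED_PURPOSES:
        raise ValueError(f"unknown purpose: {purpose}")
    return purpose in sub.purposes

# Usage: without opt-in, the data may serve the task but not training.
s = collect_input("user query", train_opt_in=False)
print(may_use_for(s, "task_completion"))  # True
print(may_use_for(s, "training"))         # False
```

The design choice worth noting is that the purpose set is fixed (frozen) at collection time, so the system cannot quietly repurpose data later without going back to the user for consent.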