Informed Decisions



Users may choose not to activate a system feature, or not to provide a data input, in ways that affect the accuracy, efficiency, or effectiveness of the system. They want to be warned when their choice will have such an effect.

Example from Ritual of showing the user clear changes to outcomes when a portion of the AI is disabled
The application shown above explains what happens when a user limits the AI functions around location-based services, and how doing so will degrade some of the outputs.


The system warns the user that deactivating a system action, dismissing data, or declining to provide data may affect the outcomes of some actions.
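The pattern above can be sketched in code: before a feature is deactivated, the UI looks up which outcomes depend on it and surfaces a warning so the user can decide with full information. This is a minimal illustration; the feature names, outcome descriptions, and message wording are hypothetical, not taken from the application pictured.

```typescript
type FeatureId = "location" | "contacts" | "usageHistory";

// Hypothetical map from a feature to the outcomes that depend on it.
const affectedOutcomes: Record<FeatureId, string[]> = {
  location: ["nearby recommendations", "delivery time estimates"],
  contacts: ["friend suggestions"],
  usageHistory: ["personalized ranking"],
};

// Build the warning shown before the user confirms deactivation.
// Returns null when the feature has no downstream effects worth warning about.
function deactivationWarning(feature: FeatureId): string | null {
  const outcomes = affectedOutcomes[feature];
  if (!outcomes || outcomes.length === 0) return null;
  return `Turning off ${feature} will reduce the accuracy of: ${outcomes.join(", ")}.`;
}
```

For example, `deactivationWarning("location")` produces a message naming both location-dependent outcomes, which the UI would display in a confirmation dialog before applying the change.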


Of course, we can design the system to warn the user that it needs data for accuracy when in fact we would (also) like to capture the data for other purposes, such as customer analysis or marketing leads. This pattern is therefore open to misuse in bad faith. Many patterns like this rely on honesty and appropriate use: lies of omission can easily turn an otherwise beneficial pattern into a coercive dark pattern.