Crowdsourcing <span>Crowdsourcing</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Fri, 10/25/2019 - 13:38</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><strong>Problem:</strong></p> <p>The system has insufficient data, or low confidence in its data, and relies on explicit user input to make effective, real-time updates to its AI predictions. The user doesn’t want to put a lot of effort into providing it.</p> <figure><img alt="The Trainline app asks for user input to accurately show seat availability on board." data-entity-type="file" data-entity-uuid="78eb1718-bd6d-405a-a3ae-759afbdfb3c1" src="/sites/default/files/content-images/SmartPattern.gif" style="max-width:350px;width:100%" /> <figcaption>The Trainline app uses crowdsourcing to help make predictions around seat availability, keeping the interactions light so users are more inclined to participate.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system provides easy input options for users to contribute back (by choice) to the functioning of the AI and highlights the reasons for doing so.</p> <p><strong>Discussion:</strong></p> <p>Many algorithms rely on ongoing user updates to the information they work from in order to be effective. While some gather these updates in the background (think of Google Maps or Waze, which depend on regular traffic updates, largely gleaned from the relative speed of drivers over a distance, to accurately predict travel times), other algorithms may need users to actively give feedback. 
By making input easy to offer and the reasons for offering it clear, AI systems can bring users into the process and help ensure the outcomes are useful for everyone.</p> <p>&nbsp;</p> <p><em>Pattern submission via <a href="">Jan Srutek</a></em><br /> &nbsp;</p> </div> Explaining Reductive Inputs <span>Explaining Reductive Inputs</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 18:09</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/46" hreflang="en">Fairness &amp; Inclusiveness</a></div> </div> <div><p><strong>Problem:</strong></p> <p>The user wants the system to respect their identity and not force them to conform to an identity that doesn't match their own simply to use the app.</p> <figure><img alt="Example from Ada of a binary gender input requirement and follow-on explanation of why the limits are in place" data-entity-type="file" data-entity-uuid="ff12460a-b778-4432-a078-bf233aa5ba9c" src="/sites/default/files/content-images/Explaining_Reductive_Inputs-ada.png" /> <figcaption>By providing an explanation of why it requires a binary response, this health app acknowledges the identity of the individuals using it even when it has to limit them within the application.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>If the system does ask the user to make a binary choice around identity—e.g. if a health app asks a user to choose between male and female—then the system explains why it is necessary to reduce the options to only these choices, and clarifies the implications of each.</p> <p><br /> <strong>Discussion:</strong></p> <p>While ideally an app would allow the user to choose any gender, medical diagnosis normally requires a binary choice between male and female. 
The UI is rarely the right place to discuss the relationship between gender and sex, or the social construction of both. Explaining the necessity of a reductive question in a way that makes clear the system empathizes with the user's concerns is tricky, but the system should do its best.<br /> &nbsp;</p> </div> Dark Pattern: Stealth Training <span>Dark Pattern: Stealth Training </span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 17:15</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><strong>Problem:</strong></p> <p>The user wants to know when the inputs they provide are being used to complete their intended task(s) and when they're being used to train the system.</p> <figure><img alt="Example of stealth data collection from webscore" data-entity-type="file" data-entity-uuid="54c34f5c-3d08-42b1-a608-77eacc1e82f7" src="/sites/default/files/content-images/Stealth_Training-webscoreai_0.png" /> <figcaption>In this example, there is no explanation of what the user input will do after it is provided. Will it help to train the system?</figcaption> </figure> <p><strong>Dark pattern response:</strong></p> <p>The system collects data from the user for training purposes, but either does not declare what use the data is being put to, or misleads the user into thinking it is being used for task completion.</p> <p>&nbsp;</p> <p><strong>Discussion:</strong></p> <p>Understanding what the system is doing at any point is vital for building trust in the system. Beyond that, if the user knows that data is used for training rather than direct task completion, they may choose not to provide that data. 
Leading the user to believe something that is not the case, whether through deliberately misleading language or simple lies of omission, is a coercive dark pattern and should be avoided.<br /> &nbsp;</p> </div> Confirm Configuration <span>Confirm Configuration </span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 17:04</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p dir="ltr"><strong>Problem:</strong></p> <p dir="ltr">The system requires the user to provide configuration details in order to work effectively. The user does not want to invest effort in this process.</p> <figure><img alt="Two examples of confirmation messaging prompts in a chatbot conversation flow" data-entity-type="file" data-entity-uuid="d2b20a93-cd06-42c3-8eaf-32188e9443d3" src="/sites/default/files/content-images/Confirm_Configuration.png" /> <figcaption>In these two examples, the system uses AI to offer suggestions and prompts to set its functioning so the user can continue with a minimum of input effort.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>Rather than asking for all configuration details upfront, the system starts running on minimal input and sensible defaults, and organically prompts the user for more configuration details over time. This can take the form of confirming defaults, e.g. "It looks like 3:32pm where you are. Is that correct?", or of asking for input on unknown variables.</p> <p>&nbsp;</p> <p><strong>Discussion:</strong></p> <p>A key difference between many AI-driven apps and non-AI apps is that the user experience will evolve over time, as the user becomes more comfortable using the system and as the system gains in accuracy and capability. 
With this in mind, designers should focus less on fully configuring the system upfront and more on how it can become increasingly customized to the user's requirements in a staged process. This is especially relevant for chatbot-style apps, where configuration questions can slide naturally into other conversations.<br /> &nbsp;</p> </div> Qualitative Feedback for Training <span>Qualitative Feedback for Training</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 16:56</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><strong>Problem:</strong></p> <p>When a system being trained by a user is not producing adequate outputs, more information is required to understand what isn't working as expected or desired. The user does not want to invest a lot of effort in this process.</p> <figure><img alt="An example of a low-effort feedback option with prompts (multiple choice inputs and open text entry field) in a user app" data-entity-type="file" data-entity-uuid="98ead399-30da-4b54-b67c-e6e2c0bb264e" src="/sites/default/files/content-images/Qualitative_Feedback_For_Training-ada.png" /> <figcaption>In the two images above, a user is given the option to provide feedback with the simple push of a button; the application then makes feedback easy by auto-generating some options as well as providing an open text field for more information if desired.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>Triggered by a negative response or rating, the system prompts the user to provide qualitative feedback in the form of an explanation of what is unsatisfactory. 
This may be free-text input or a selection from preset options, and is usually combined with <a href="/patterns/81/quantitative-feedback-training">Quantitative Feedback for Training</a>.</p> <p>&nbsp;</p> <p><strong>Discussion:</strong></p> <p>Because the effort required here is relatively high for the user, with no immediate reward, care should be taken to motivate the user accordingly. The system might explain that the feedback will help it improve, establishing the benefit to the user; or the motivation could simply be a pleasant and rewarding interaction, e.g. an interesting conversation point with a chatbot.<br /> &nbsp;</p> </div> Quantitative Feedback for Training <span>Quantitative Feedback for Training </span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 16:55</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p dir="ltr"><strong>Problem:</strong></p> <p dir="ltr">The system needs to be trained by the user. The user does not want to invest a lot of effort in this process.</p> <figure><img alt="Example of a low-effort quantitative feedback option (thumbs up or thumbs down) included in a conversational flow with a chatbot" data-entity-type="file" data-entity-uuid="3b776fce-3e85-4264-b97b-2ceb3832121a" src="/sites/default/files/content-images/Quantitative_Feedback_For_Training-Daisy.png" /> <figcaption>By prompting for a quick thumbs up or thumbs down, the system can get quick and easy feedback from the user to train itself with.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>When providing an output, the system prompts the user to rate the quality of that output. 
This could be a simple thumbs up / down, or slightly more granular feedback such as a star rating or other score.</p> <p>&nbsp;</p> <p><strong>Discussion:</strong></p> <p>Whether this feedback is fed directly back into training the algorithm or captured for analysis outside the system will depend on the specifics of the implementation. If the feedback is captured for analysis, it makes sense to also prompt the user for <a href="/patterns/86/qualitative-feedback-training">Qualitative Feedback for Training</a>, with that feedback mechanism triggered by negative quantitative feedback.</p> </div> Privacy Reassurance <span>Privacy Reassurance </span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 16:26</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>Users want to know what happens to their data when it's captured by the system.</p> <figure><img alt="Example of clear privacy and data sharing practices in the Capturebot AI function from Microsoft" data-entity-type="file" data-entity-uuid="9a6bac43-f02b-462b-bf82-d71dec30715e" src="/sites/default/files/content-images/Privacy_Reassurances-capturebot_1.png" /> <figcaption>Microsoft's AI-powered caption bot has an explanatory box about what happens to the image after uploading, outside of the primary captioning function.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system communicates to the user what data it's capturing, how that data is stored, how long it's stored for, and what other systems the data is communicated to.</p> <p>&nbsp;</p> <p><strong>Discussion:</strong></p> <p>This is especially important when it comes to data captured based on user 
behaviour rather than explicit input. Ideally the system captures the minimum amount of data needed to achieve its task and stores it for no longer than necessary, but sometimes data must be retained (e.g. so the user can maintain a profile, or to train the system or, in some cases, another system). In these cases, a key aspect of transparency is helping the user understand the mechanics of this data capture and storage.</p> <p dir="ltr">&nbsp;</p> <p dir="ltr"><b>Other Examples:</b></p> <figure><img alt="Example of privacy reassurance following a data deletion in an application" data-entity-type="file" data-entity-uuid="67e7e3a3-c4c2-4528-a95f-c0ef92a9324e" src="/sites/default/files/content-images/Privacy_Reassurances-wysa.png" /> <figcaption>In this example, the user is given valuable insight into what happens to their data after a deletion.</figcaption> </figure> <p dir="ltr">&nbsp;</p> <p dir="ltr">&nbsp;</p> </div>
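The Quantitative and Qualitative Feedback for Training patterns above compose into a single flow: a negative quantitative rating triggers the qualitative follow-up prompt. The sketch below illustrates that trigger logic in Python; it is a hypothetical example, not code from any product shown here, and every name in it (`FeedbackEvent`, `rate_output`, the preset reason labels) is invented for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Preset explanations offered alongside a free-text field when the
# qualitative follow-up is triggered. Labels are illustrative only.
PRESET_REASONS = ["Not relevant", "Inaccurate", "Confusing", "Something else"]

@dataclass
class FeedbackEvent:
    output_id: str                    # which AI output is being rated
    positive: bool                    # quantitative signal (thumbs up / down)
    reason: Optional[str] = None      # qualitative: chosen preset option
    free_text: Optional[str] = None   # qualitative: open explanation

def rate_output(output_id: str, positive: bool) -> Tuple[FeedbackEvent, List[str]]:
    """Record the quantitative rating and decide whether to show the
    qualitative follow-up. Only a negative rating triggers the prompt,
    keeping the happy path at a single tap."""
    event = FeedbackEvent(output_id, positive)
    followup_options = [] if positive else PRESET_REASONS
    return event, followup_options

def add_qualitative(event: FeedbackEvent, reason: str, free_text: str = "") -> FeedbackEvent:
    """Attach the user's qualitative explanation to the earlier rating."""
    event.reason = reason
    event.free_text = free_text or None
    return event
```

Whether `FeedbackEvent` records are fed straight back into training or logged for offline analysis is an implementation choice, as the Quantitative Feedback discussion notes; the trigger logic is the same either way.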