You should not control what you do not understand: the risks of controllability in AI

    Gabriel Diniz Junqueira Barbosa, Simone Diniz Junqueira Barbosa

    Chapter from the book: Loizides, F. et al. (eds.) 2020. Human Computer Interaction and Emerging Technologies: Adjunct Proceedings from the INTERACT 2019 Workshops.


    In this paper, we posit that giving users control over an artificial intelligence (AI) model may be dangerous without a proper understanding of how the model works. Traditionally, AI research has been more concerned with improving accuracy rates than with putting humans in the loop, i.e., with user interactivity. However, as AI tools become more widespread, high-quality user interfaces and interaction design become essential to consumers' adoption of such tools. As developers seek to give users more influence over AI models, we argue that this urge should be tempered by improving users' understanding of the models' behavior.


    How to cite this chapter
    Junqueira Barbosa, G. and Junqueira Barbosa, S. 2020. You should not control what you do not understand: the risks of controllability in AI. In: Loizides, F. et al. (eds.), Human Computer Interaction and Emerging Technologies: Adjunct Proceedings from the INTERACT 2019 Workshops. Cardiff: Cardiff University Press. DOI: https://doi.org/10.18573/book3.af

    This is an Open Access chapter distributed under the terms of the Creative Commons Attribution 4.0 license (unless stated otherwise), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright is retained by the author(s).

    Peer Review Information

    This book has been peer reviewed. See our Peer Review Policies for more information.

    Additional Information

    Published on May 7, 2020