
WHY
The AI Learning Lab

Scientific Background of Our Project

Novel applications of artificial intelligence (AI) are increasingly entering the work of medical diagnosis. These AI algorithms are opaque, offering limited insight into how their outcomes are produced [1]. As a result, the professionals who work with them face unexpected, ambiguous situations in which they have limited capacity to grapple with the AI applications and act on them mindfully [2]. Therefore, without deep, critical knowledge, professionals are prone to errors: their attention becomes biased, their cognitive processing remains at the surface, and their actions tend to be led by the algorithms [3, 4].

Developing deep, critical knowledge and skills for working with these powerful, opaque technologies is thus crucial for new generations of professionals. Currently, the formal training of medical professionals focuses on classic medical training, and learning how to work with these novel technologies is expected to happen through extracurricular training [5, 6]. However, the extracurricular training programs currently offered to medical professionals focus primarily on basic awareness of these technologies and hardly cover the practical expertise of working mindfully with AI applications under different working conditions [7].


Medical professionals are assumed to gain such learning as they encounter these tools in their clinical work. In contrast, research on work-practice learning has shown that under high work pressure [8], especially when decisions are high-stakes, people have limited opportunity for experimentation and deep learning [9]. Instead of developing deep knowledge, professionals may remain at a surface-level understanding and mainly learn how to delegate their work to these systems [10]. At best, their learning will be driven by unsystematic, chance-driven experiences [9]. The need thus arises for systematic training that helps medical professionals prevent mistakes and malpractice when working with AI.

[1] Burrell, Jenna. 2016. “How the Machine ‘thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3 (1): 1–12.

[2] Zhang, Zhewei, Youngjin Yoo, Kalle Lyytinen, and Aron Lindberg. 2021. “The Unknowability of Autonomous Tools and the Liminal Experience of Their Use.” Information Systems Research, August. https://doi.org/10.1287/isre.2021.1022.

[3] Jarrahi, Mohammad Hossein, Gemma Newlands, Min Kyung Lee, Christine T. Wolf, Eliscia Kinder, and Will Sutherland. 2021. “Algorithmic Management in a Work Context.” Big Data & Society 8 (2): 1–18.


[4] Newell, Sue, and Marco Marabelli. 2015. “Strategic Opportunities (and Challenges) of Algorithmic Decision-Making: A Call for Action on the Long-Term Societal Effects of ‘datification.’” The Journal of Strategic Information Systems 24 (1): 3–14.

[5] Tajmir, Shahein H., and Tarik K. Alkasab. 2018. “Toward Augmented Radiologists: Changes in Radiology Education in the Era of Machine Learning and Artificial Intelligence.” Academic Radiology 25 (6): 747–50.

[6] Waite, Stephen, Zerwa Farooq, Arkadij Grigorian, Christopher Sistrom, Srinivas Kolla, Anthony Mancuso, Susana Martinez-Conde, Robert G. Alexander, Alan Kantor, and Stephen L. Macknik. 2020. “A Review of Perceptual Expertise in Radiology-How It Develops, How We Can Test It, and Why Humans Still Matter in the Era of Artificial Intelligence.” Academic Radiology 27 (1): 26–38.

[7] Schuur, Floor, Mohammad H. Rezazade Mehrizi, and Erik Ranschaert. 2021. “Training Opportunities of Artificial Intelligence (AI) in Radiology: A Systematic Review.” European Radiology 31 (8): 6021–29.

[8] Sonnadara, Ranil R., Aaron Van Vliet, Oleg Safir, Benjamin Alman, Peter Ferguson, William Kraemer, and Richard Reznick. 2011. “Orthopedic Boot Camp: Examining the Effectiveness of an Intensive Surgical Skills Course.” Surgery 149 (6): 745–49.

[9] Beane, Matthew. 2019. “Shadow Learning: Building Robotic Surgical Skill When Approved Means Fail.” Administrative Science Quarterly 64 (1): 87–123.


[10] Goddard, Kate, Abdul Roudsari, and Jeremy C. Wyatt. 2011. “Automation Bias: A Systematic Review of Frequency, Effect Mediators, and Mitigators.” Journal of the American Medical Informatics Association: JAMIA 19 (1): 121–27.
