Survey on artificial intelligence in medicine

The Ethics Lab of the Center for Artificial Intelligence in Medicine (CAIM) collected voices from the public to learn about their perception of the opportunities and risks associated with the use of artificial intelligence in medicine. The results will feed back into the center’s research activities.

Artificial and human intelligence are increasingly coming together in medicine as well. How we want to deal with this is an important societal question. © Photo by Tara Winstead, Pexels

 

The Embedded Ethics Lab of the Center for Artificial Intelligence in Medicine (CAIM) is dedicated to providing ethical support for all CAIM activities. On the one hand, it seeks to engage in public dialogue; on the other, it aims to enable researchers to reflect on the ethical dimensions of their work. The Ethics Lab's tasks also include teaching and research on the ethics of artificial intelligence (AI) applications in medicine.

One of the goals of CAIM is to develop new technologies for better patient care. This raises ethical questions about the use of AI in healthcare: Can it be used as an assistive tool supporting healthcare professionals? Can it afford doctors more time to spend with their patients? And where is AI in medicine heading? Will it be given a more prominent role, predicting our health and making decisions for us?

"This fall, we used the Researchers’ Night, the University of Bern's science festival, to ask visitors about AI in medicine," says Rouven Porz, associate professor of medical ethics and ethics expert at the CAIM Ethics Lab. Visitors were invited to either fill out an online form or to post notes on screens at the CAIM exhibition to express their opinions on their associations with AI in healthcare and how they perceive the future in this field. Results of the non-representative survey have now been analyzed.

Integrating more AI into the medical curriculum

A total of 120 responses were received. In the quantitative online survey, participants rated statements about their own AI knowledge and about current trends on a scale of 1 to 10 (from no agreement to complete agreement). 91 percent agreed predominantly (36%) or entirely (55%) with the statement "I have a good understanding of what is meant by AI." 89 percent agreed with the statement "AI will have a decisive impact on our future," and 78 percent with the statement that AI raises many ethical questions. Free-text responses emphasized the importance of educating the public about AI and of integrating the topic more firmly into medical studies.

AI - competition or assistance for humans?

Answers to the quantitative online questions "AI in medicine scares me" and "AI will never replace our human intelligence" were more mixed. This is in line with the qualitative free-text statements that visitors posted on the screens on site, which can roughly be categorized as "fears" and "opportunities."

 

Scene from a research project on the use of artificial intelligence, here assisting in the diagnosis of lung diseases such as Covid-19 from radiological images. © Adrian Moser for ARTORG Center

 

The non-scientific on-site trend survey indicated that the "fears" are mainly associated with the differences between AI and humans, and that a certain competition is perceived between the two. Some feared that AI would replace humans because it could be faster, better, or stronger. At the same time, many "fears" also appear as "opportunities." For example, AI's lack of emotionality could lead to a 'rationality' that makes healthcare diagnostics faster and more accurate, which in turn could reduce costs. Typical errors due to human fatigue or distraction could be identified and avoided. With its higher accuracy and analytical power, AI is also expected to enable personalized treatments.

More communication and patient involvement needed

Concerning society, further feedback emphasized the need for control options, such as legal frameworks and the possibility of "pulling the plug" on AI. This was accompanied by considerations of responsibility: on the one hand, AI developers have considerable influence here, which they must use responsibly; on the other hand, responsibility for medical decisions should not be relinquished to AI, leaving humans with the final say. It was voiced several times that, with or without AI, communication between physicians, patients, and engineers needs to be increased, and that including patients in the development of AI for medical applications is central.

Feeding results back into AI research

For Rouven Porz of the Embedded Ethics Lab, the results show, first of all, that CAIM is well advised to address the ethical aspects of artificial intelligence: "Especially in medicine, AI raises hopes and fears. Which of these come true also depends on how we develop AI further." And Claus Beisbart, philosopher of science at CAIM's Embedded Ethics Lab, adds, "We are taking the results of this survey into our discussions with young researchers. In addition, we also want to incorporate them into our research at the Lab."

Research and teaching at the CAIM Ethics Lab: Rouven Porz (left), associate professor of medical ethics and head of medical ethics at the Insel Gruppe Bern, and Claus Beisbart, professor of philosophy of science at the University of Bern, who also holds a doctorate in physics. © University of Bern / Image: Monika Kugemann

CAIM Embedded Ethics Lab

The Center for Artificial Intelligence in Medicine (CAIM) at the University of Bern integrates ethical aspects early into its research on artificial intelligence applications for healthcare. The CAIM Ethics Lab offers researchers support and advice on ethical issues surrounding AI in medicine, from the beginning of a research project to its implementation in clinical practice. It also conducts research exploring the ethical dimensions of AI in healthcare. The ethics team covers the fields of medical ethics, philosophy of science, politics, and law, and involves various faculties of the University of Bern as well as the Inselspital, Bern University Hospital.

About the author

Monika Kugemann is responsible for communications at the Center for AI in Medicine (CAIM).
