ABSTRACT
Objective To assess medical students’ attitudes, knowledge, opinions, and expectations regarding medical artificial intelligence solutions, according to their sex and year of study.
Methods This cross-sectional survey was a single-center study conducted at a medical school in São Paulo, Brazil, using an online questionnaire.
Results Of 145 medical students who completed the survey (female, n=108/145, 74%; age, 18-25 years, n=129/145, 89%), 71 (49%) classified their artificial intelligence knowledge as intermediate, and 137 (95%) wished that artificial intelligence would be regulated by the government. If artificial intelligence solutions were reliable, fast, and available, 74% (107/145) intended to use artificial intelligence frequently, but fewer participants approved of artificial intelligence being used by other health professionals (68/145, 47%) or directly by patients (26/144, 18%). The main perceived benefit of artificial intelligence was accelerating diagnosis and disease management (116/145, 80%), and the main perceived problem was overreliance on artificial intelligence and loss of medical skills (106/145, 73%). Students believed that artificial intelligence would facilitate physicians’ work (125/145, 86%); increase the number of appointments (76/145, 53%); decrease their financial gain (63/145, 43%); and not replace their jobs but serve as an additional source of information (102/145, 70%). According to 88/145 (61%) participants, legal responsibility should be shared between the artificial intelligence manufacturer and physicians/hospitals.
Conclusion Medical students showed positive perceptions of and attitudes towards artificial intelligence in healthcare. They showed interest in artificial intelligence and believed it could be incorporated into daily clinical practice if regulated, user-friendly, and accurate. However, concerns regarding this technology must be addressed.
Keywords
Artificial intelligence; Adoption; Perception; Students, medical; Knowledge, attitudes, practice; Surveys and questionnaires
Highlights
■Most students have a positive attitude towards medical artificial intelligence.
■Artificial intelligence solutions would be frequently adopted if accurate, fast, and user-friendly.
■Expectations include work facilitation and increased diagnosis and management speed.
■Most fear losing medical skills to artificial intelligence and desire governmental regulations.
In Brief
Artificial intelligence solutions are increasingly being used in medicine. Studies evaluating the attitudes, knowledge, and opinions of medical students regarding medical artificial intelligence in Latin America are scarce. Thus, we performed the first online survey on this topic among medical students of Faculdade Israelita de Ciências da Saúde Albert Einstein (FICSAE) in 2023-2024.
INTRODUCTION
Artificial intelligence (AI) technologies are widely used in everyday applications worldwide, driving significant societal and economic changes.(1) Artificial intelligence can learn from large sets of data, including medical texts, images, and patient records. This makes medical AI solutions applicable to various areas, including diagnosis,(1) therapy,(2) clinical decision support,(3) personalized medicine,(4) clinical research,(5) drug development,(6) administrative tasks,(7) scientific or layman writing,(8,9) and virtual healthcare assistance.(10) There is a growing focus on how prepared the healthcare workforce is for this new situation,(11) and it has been recommended that AI solutions be developed in close collaboration with practicing clinicians.(12)
To be accepted and implemented, AI solutions with a clinical impact must be evaluated by physicians. Physicians play a crucial role in deciding whether to embrace this technology. Therefore, it is important to explore and understand their perspectives. Medical students, as the future generation of physicians, will also be affected by this new technology. Several studies have demonstrated that AI influences the choice of some specialties, such as radiology, in 25-49% of cases.(13-15)
Medical students will need to familiarize themselves with AI basics, such as machine learning (ML) and deep learning (DL), understand not only their potential applications but also their limitations, and recognize the mistakes of AI algorithms.(16) In recent years, the term “AI literacy” has been used to describe the ability to critically evaluate, effectively collaborate with, and use AI as a tool for the day-to-day provision of healthcare services.(17,18) Given the increasing reliance on AI, this requires changes in the general medical education curriculum.(16) One study showed that 59% of students agreed that AI should be part of medical training and 73% wanted more teaching focusing on AI in medicine.(19) More recently, two major educational institutions in the US took a more direct approach to the problem, launching a dual degree in medicine and AI.(20)
Currently, AI is not part of daily medical practice for various reasons, one being the reluctance of physicians to embrace change, along with their potentially false perceptions of and negative sentiment towards AI solutions.(21,22) Thus, there is a need to better understand medical students’ perspectives, intentions to use, and possible attitudes toward the use of AI technology in medicine, especially in parts of the world where current knowledge of this subject is lacking, such as Latin America. Opinion surveys, which serve as important tools for evaluating satisfaction with a particular service, consist of a list of questions whose objective is to extract certain data from a group of people.(23) Previous studies on medical AI have shown different results according to sex and year of study in medical school.(24)
OBJECTIVE
We hypothesized that Brazilian medical students are not well informed on medical artificial intelligence and have a negative attitude towards it. The main objective of this study was to assess the medical students’ attitudes, knowledge, opinions, and expectations regarding artificial intelligence by conducting a cross-sectional survey at a single-center private medical school in São Paulo, Brazil. The survey focused on practical aspects of the current or potential use of artificial intelligence solutions in daily medical practice. The secondary objective was to verify whether there were sex- and year of study-based differences in their responses.
METHODS
Ethical consideration
We conducted a cross-sectional observational study using an opinion survey via a questionnaire approved by the Ethics Committee of the Hospital Israelita Albert Einstein (CAAE: 30749620.6.0000.0071; # 6.729.652). The online survey was designed using the SurveyMonkey platform (SurveyMonkey, Inc., San Mateo, CA, USA; www.surveymonkey.com) and a link was shared with medical students at Faculdade Israelita de Ciências da Saúde Albert Einstein (FICSAE). It was anonymous and confidential, ensuring that only the authors of the study had access to the responses. Participation was voluntary and no financial incentives were provided.
Questionnaire design and data collection processes
The questionnaire, developed by the authors to assess opinions on the use of AI, was based on two previous surveys conducted on physicians at the same institution regarding Telemedicine and AI.(25,26) The questionnaire included the following five sections: Section 1 included the informed consent form (ICF; Question 1). Section 2 (Questions 2-9) gathered information about the participants, including sex, age, highest level of education, year of study at the medical school, self-assessed knowledge of AI in general (not specific to healthcare AI solutions), and their use of computer or smartphone applications embedding AI solutions for daily tasks, such as WhatsApp, Instagram, Facebook, Waze, Google Maps, and bank apps. This section also asked about the medical students’ awareness and previous experience of using medical AI solutions for healthcare in their medical training. Section 3 (Questions 10-17) explored participants’ thoughts on AI solutions to support diagnosis, patient management, and subsidiary examination interpretation of diseases, such as COVID-19, and the use of AI solutions to support the diagnosis and/or treatment of diseases by other staff members, such as nurses or physiotherapists, or directly by the patient. It also included a hypothetical exercise to assess students’ levels of anxiety and actions taken if an AI algorithm used to assess their moles reported a suspicious diagnosis of melanoma, as well as questions about AI’s expected benefits and problems. Section 4 (Questions 18-23) assessed the intention to adopt AI, AI use and how AI may interfere with work, the expectation of physicians being replaced by AI solutions, and AI’s potential interference with professional financial earnings. Section 5 (Questions 24-27) evaluated how participants would act in possible scenarios of disagreement between AI and physicians, and legal and governmental regulatory issues.
Along with the questions, there were many opportunities to comment on the responses in an open text box. Students could choose to skip any questions so that the number of respondents could vary among questions.
The survey comprised two stages. In the first stage, from October 2023 to January 2024, the questionnaire link was sent via WhatsApp (WhatsApp LLC, Meta, Inc., version 24.9.4) to all FICSAE medical students (n=697). The message included a brief introduction inviting students to participate in the survey on the SurveyMonkey platform (SurveyMonkey, Inc.). In the second stage, in April 2024, an email containing the questionnaire’s QR code and the same introductory message used on WhatsApp was sent to all FICSAE medical students. Additionally, some students were approached individually by one of the authors (BGB), another medical student at the institution. To prevent multiple responses from the same participant, the SurveyMonkey Internet Protocol (IP) address-blocking mechanism, used to identify and notify users if the questionnaire has already been completed, was implemented. The questionnaire was initially sent to two medical students from the target population to pilot test the platform, and their responses were included in the analysis. Our survey followed the Checklist for Reporting Survey Studies (CROSS) guidelines for survey study reporting, and the estimated completion time for the questionnaire was approximately 6 min. The Supplementary Material shows the full questionnaire, translated into English.
If the participant’s response to the first question on ICF was “accepted,” they were able to continue responding to subsequent questions; otherwise, the survey was terminated. Only the FICSAE medical students with “accepted” ICF response were included in the study. Although there were no exclusion criteria for entering the study, question 5 (Which year of study are you in the medical school?) was used for quality control.
No specific technologies were evaluated. Most of the questions regarding the clinical use of AI concerned algorithms for diagnosing or managing diseases, aiming to capture possible expectations of these future physicians in clinical practice. The questions were based on the authors’ previous experience of developing AI algorithms for clinical support computer systems.
Statistical analysis
The participants were stratified by sex and also divided into three groups based on the year of study at the medical school: group 1 (year 1 to 2), group 2 (year 3 to 4), and group 3 (year 5 to 6), based on their responses to Question 5. Statistical analyses were performed using the χ2 test in Prism software (version 6, GraphPad Software, Inc., San Diego, CA, USA) to compare group differences (sex-based and year of study). Statistical significance was set at p<0.05.
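The group comparisons described above use the standard χ² test of independence on contingency tables of response counts. As a minimal sketch of that computation (using hypothetical counts, not the study’s data), the statistic and its degrees of freedom can be derived from observed and expected frequencies as follows:

```python
def chi_square(table):
    """Pearson's chi-square statistic for a contingency table.

    table: list of rows of observed counts, e.g. sex (rows) by
    response category (columns). Returns (statistic, degrees of freedom).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(col_totals) - 1)
    return stat, dof

# Hypothetical 2x2 table: sex (rows) x in favor / against (columns)
stat, dof = chi_square([[10, 20], [20, 10]])
# With df=1, a statistic above the 3.841 critical value corresponds
# to p<0.05, the significance threshold used in this study.
```

In practice, a statistics package such as GraphPad Prism (used here) or `scipy.stats.chi2_contingency` computes the p-value directly from this statistic and its degrees of freedom.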
RESULTS
Participant characteristics
In total, 170 participants provided consent for this study, a response rate of 24% (170/697). One respondent was excluded because of an inappropriate answer to the quality control question (i.e., year of study at the medical school). Thus, the overall questionnaire completion rate was 85.8% (145 of 169). Because the participants were allowed to skip any question, the number of respondents varied among the questions.
Table 1 presents the profiles of the students. Of the 169 consenting participants, most were female (n=124, 73%); aged 18-25 years (n=145, 86%); and held no prior academic titles (n=164, 97%). Regarding their year of study at the medical school, 69 (41%), 53 (31%), and 47 (28%) were in their study year 1 or 2, 3 or 4, and 5 or 6 (the internship cycle in Brazil), respectively.
Regarding their knowledge of AI, of the 145 participants who completed the questionnaire, nearly half (n=71, 49%) rated their AI knowledge as intermediate, whereas 10% (n=14) rated it as high. Most participants (n=100, 69%) frequently (n=65, 45%) or always (n=35, 24%) used AI algorithms for daily tasks outside the medical field.
More than half (n=81, 56%) were either unaware of (n=30, 21%) or uncertain about (n=51, 35%) AI solutions for medicine. Despite this, 61 (42%) had experienced AI during their medical course, whereas most (n=84, 58%) either had no experience (n=61, 42%) or were uncertain about it (n=23, 16%).
Also, a significant difference was observed between the sexes regarding the self-assessment of knowledge of AI. Women mostly classified their knowledge as low or intermediate, while men considered it as intermediate or high (p<0.0001).
Perspective on use of artificial intelligence solutions
Table 2 presents the responses to questions 10-17. Most participants supported the use of AI solutions for diagnosis (n=131/144, 91%), patient management (n=111/145, 78%), and interpretation of imaging examinations (n=130/145, 91%). However, participants’ opinions were divided regarding the use of AI by other health professionals (nurses or physiotherapists) for diagnosis or patient management, with 47%, 19%, and 34% in favor, uncertain, or against, respectively. Moreover, participants disapproved of the use of AI solutions by patients, with 64%, 18%, and 18% against, uncertain, or in favor, respectively. When considering themselves as patients, participants acknowledged that the use of AI for diagnosis by non-specialists could cause distress for some types of diagnoses, such as skin melanoma (Question 15). For instance, when asked hypothetically about their reaction if a melanoma detection AI, used to assess a mole, showed a high probability of melanoma, 98% (n=142/145) reported a high level of anxiety or would be anxious about needing to see a dermatologist as soon as possible.
Participants mentioned benefits, including faster (n=116/145, 80%) and more accurate diagnoses (n=98/145, 68%); reduced healthcare costs (n=70/145, 48%); and a reduction in the number of subsidiary examinations (n=61/145, 42%). However, concerns were raised about over-reliance on AI, leading to the loss of medical skills (n=106/145, 73%), incorrect diagnoses or management reports (n=103/145, 71%), and deterioration of the physician-patient relationship (n=89/145, 61%). Additionally, some participants were concerned about the lack of transparency in AI solutions (n=43/145, 30%) and the risk of misusing patient information (n=40/145, 28%). There were no significant differences in responses between sexes.
Figure 1 shows the total number of responses on possible interference of AI solutions in daily work processes, number of attendances, and utility for diagnosis and/or management. Most respondents believed that AI would streamline the work process (n=125/145; 86%), increase the number of appointments (n=76/144; 53%), and provide support for diagnosis (n=122/145; 84%) and management (n=79/145; 54%).
A) How artificial intelligence would interfere with the work process, assuming the availability of a reliable artificial intelligence algorithm that takes up to 2 min of participants’ time. B) What artificial intelligence solutions would do regarding the volume of appointments in participants’ day-to-day work. C) Artificial intelligence usefulness for clinical support (diagnosis, management, both, no usefulness, or harmful) (Questions 19-21)
Figure 2 shows, by sex, the intentions to adopt medical AI support solutions, beliefs regarding medical job replacement, and expected financial interference. Overall, of the 145 participants, 107 (74%) expressed the intention to adopt AI solutions (if reliable, quick, and regulated) frequently (n=58/145), most of the time (n=32/145), or always (n=17/145). However, 26% (37/145) mentioned that they would only sometimes (n=32/145) or rarely (n=5/145) use AI solutions. Interestingly, there was a significant difference between female and male students regarding this question: female participants mostly responded “sometimes,” whereas male students gave proportionally more “rarely” or “always” responses (p=0.0013). However, when grouping the intention to adopt into “against” (never, rarely, or sometimes) and “in favor” (frequently, most of the time, and always), there was no significant difference (p=0.8249). In terms of medical job replacement, 102/145 (70%) participants believed that AI solutions would not replace physicians but serve as an additional information source; male participants were significantly less optimistic than their female counterparts in this regard (p=0.0477). Male participants also differed significantly from their female peers in their expectations of financial gain from the use of AI solutions (p=0.0026). Nevertheless, in both sexes, most believed that their earnings would decrease (n=63/145; 44%) or not be affected (n=46/145; 32%).
A) How often participants would adopt artificial intelligence in their medical routine if it were available, proved reliable in real life, and took up to 2 min of participants’ time. B) Beliefs regarding what artificial intelligence would do to image-based medical jobs, such as those of radiologists, pathologists, and dermatologists. C) Beliefs regarding what would happen to professional financial earnings if artificial intelligence were adopted on a daily basis (Questions 18, 22, and 23, by sex)
Table 3 details the responses to questions 24-27. When AI and physicians disagreed, two scenarios were proposed: one in which, hypothetically, AI and physicians had similar accuracy, and another in which AI achieved better accuracy. In the former, 72% of participants favored seeking a third opinion (n=104/144), whereas 26% believed the medical opinion should be followed (n=38/144). In the latter, 82% opted for a third opinion (n=119/145), whereas only 9% considered that the AI recommendation should be favored (n=13/145). Regarding legal responsibility, 61% (n=88/144) felt it should be shared between the AI manufacturer and physicians/hospitals; however, male participants were more strongly in favor of physicians bearing sole responsibility (p=0.0081). Most participants (n=137/145, 95%) believed AI solutions require governmental regulatory approval, with significantly more female participants in favor (p=0.0001).
In the second analysis, based on year of medical school, Tables 1S, 2S, 3S, and 4S in the Supplementary Material show the results for the three study groups. No significant differences were observed in any of the responses among the three groups, except regarding previous use of medical AI solutions (Question 9): students in the year 5 or 6 group were significantly more likely to be aware of or to have used AI solutions in medical practice than participants in the other groups (p=0.0147).
DISCUSSION
Our study revealed medical students’ perceptions regarding the use of AI in a private medical school in São Paulo, Brazil. Overall, 145 students completed the electronic questionnaire covering topics such as awareness, previous use, expectations, benefits, fears, legal issues, and regulations. Most participants were young female students (74%), reflecting the predominance of females at FICSAE (453/697, 65%). This trend was also observed in a similar study in Kuwait, with 89% of participants being women.(27)
In this study, 87% were aged between 18 and 25 years and 97% had no prior academic title, as expected. The students were distributed across the three stages of the 6-year medical education program in Brazil. Notably, first- and second-year students were the majority (41%), probably because they had more free time to participate in surveys, were new to the environment, and were more willing to engage.
Although 69% participants frequently used AI solutions in their daily lives, their self-assessed knowledge of AI was generally moderate to low, with 49% and 41% ratings, respectively. A meta-analysis comparing different countries showed that students from Germany, Lebanon, Kuwait, and Pakistan had higher levels of AI knowledge (728, 605, 513, and 713 successes per 1000 observations, respectively) than those from the United States, Nigeria, the United Arab Emirates, and England (139, 337, 291, and 93 successes per 1000 observations, respectively). Students from Egypt, Saudi Arabia, and Turkey had intermediate knowledge (492, 449, and 393 successes per 1000 observations, respectively).(28)
There are also notable differences between sexes in perceptions of AI literacy. Female participants reported significantly less knowledge than males in our study. This was also true for students from India, with a higher proportion of female medical students reporting feeling extremely unknowledgeable or unknowledgeable about the fundamentals of AI technology and its applications in healthcare than their male counterparts (66/90, 77% versus 90/126, 71%, respectively; p=0.001).(24) We believe this reflects the long-standing sex gap in Science, Technology, Engineering, and Mathematics, including AI and computer science. This disparity can influence the collective knowledge and experiences of different sexes. Encouraging more diverse participation is important to optimize the potential of all individuals, regardless of sex, in AI.
We found that 44% of participants were aware of medical AI solutions and had experienced their use in medical practice. This is considerably higher than the result of a similar study in China (13%).(29) Although this reasonable result exceeded our expectations, it is still insufficient. This lack of knowledge presents an important barrier to the effective use of AI, highlighting the urgent need to incorporate comprehensive AI education into healthcare curricula. The exposure of medical students to AI solutions in practical settings can play a pivotal role in the technology’s acceptance among future physicians.(30) Our study also shows that the students strongly favored AI solutions to support diagnosis (91%) and examination interpretation (90%), with slightly lower support for patient management (77%). This endorsement of AI solutions for clinical support is another interesting finding, which mirrors that of our previous study among physicians in the same hospital.(26) It appears that AI’s influence on patient management is perceived as riskier than its influence on diagnosis or examination interpretation. However, the use of AI solutions by other health staff members for medical purposes was not fully accepted, with 34% and 20% of participants opposing or uncertain about its benefits, respectively. Regarding the use of AI by patients, there was even stronger resistance, with 64% opposing and 18% uncertain. Perhaps this is because participants believed that allowing AI too much “power” in a non-medical context could be risky. Curiously, when placed in the patient’s position, students expressed high anxiety and a strong preference for seeking a dermatologist immediately if an AI solution were used to diagnose melanoma in a hypothetical mole.
The most important benefits students expected from AI implementation in daily medical practice were increased speed, higher accuracy, reduced costs, fewer subsidiary examinations, greater access to healthcare, and greater participation of patients in their own care. As previous studies have shown, most students have a positive attitude towards AI, recognizing its potential benefits, including improved diagnostic accuracy, better healthcare access, and reduced clinical workloads.(29,31,32) In another study, students seemed to be more accepting than fearful of the technology.(18) The most significant concerns of our participants were the fear of physicians relying too much on AI, losing their medical skills, errors in diagnosis and management, and deterioration of the physician-patient relationship. Other concerns included lack of transparency, wrongful use of medical information by employers or insurance companies, confidentiality issues, and increased costs. All of these concerns are shared globally. In other studies, students in developing countries also expressed skepticism, fearing that AI could dehumanize care.(24) Additionally, even students from other countries, such as Vietnam, perceived digitalization as a threat, especially in relation to the patient-physician relationship.(33) Several medical students mentioned concerns regarding the unpredictability of results, errors related to clinical AI, operator dependence, poor AI performance in unexpected situations, and lack of empathy or communication.(29) However, our research found that 74% were willing to adopt a reliable, regulated AI solution requiring little time to use. Of 14 other studies that reported the intention to use AI medical solutions, over 60% of respondents in 10 studies were willing to incorporate AI into their clinical practice, especially if it demonstrated high accuracy, efficiency, and ease of use.(29)
Among our participants, 86% stated that AI solutions would facilitate medical work by increasing the number of patients seen (53%), aiding diagnosis (84%), and supporting management (55%). Furthermore, 70% believed that AI would serve as an additional source of information rather than completely (1%) or partially (28%) replacing physicians in the medical imaging field, with male students showing significantly more pessimism than their female counterparts. In Australia, most medical students expressed little concern about the impact of AI on their job security as doctors,(19) while others believed that AI would make healthcare jobs obsolete.(28) Regarding financial gains, 44% assumed that they would experience a decrease in earnings, with male students displaying slightly more optimism.
Three intriguing hypothetical questions were posed to our participants to explore approaches to addressing possible discrepancies between human and AI recommendations. When humans and AI were considered to have similar accuracy, 26% favored the physicians’ opinion, whereas 72% would request a third opinion. When AI was considered to have higher accuracy than humans in a given task, only 7% would favor the physician, whereas 82% would seek a third opinion. We believe this is extremely important for physicians because the accuracy achieved when implementing a real-world AI clinical support solution can differ considerably from that obtained during algorithm development. However, there are few real-world deployments of AI solutions evaluated in clinical trials, since such trials are expensive and laborious to conduct; therefore, physicians and students must acknowledge these limitations when supporting this new technology. Regarding legal liability, 29% believed that the physician is ultimately responsible for any act, while 61% wanted this responsibility shared between humans, machines, and institutions. This is a contentious issue because there are currently no established legal regulations regarding AI-based decision support tools. In one study, half of the students believed that manufacturers and physicians should bear legal responsibility for medical errors caused by these tools, while 36% thought that only physicians should be legally liable.(31) In our study, the majority (95%) of students were in favor of the government regulating AI solutions; interestingly, female participants were significantly more inclined towards regulation. In Germany, most students supported the need for regulations.(34)
Limitations
First, this study used a cross-sectional design, providing only an initial picture of medical students’ literacy, opinions, and expectations regarding the new subject of AI. Second, our survey was performed at only one medical school; therefore, the results may not be broadly generalizable, such as to all medical schools in Brazil or South America, because various setting-specific factors can influence the results. However, it is a starting point for discussions around this important topic on the medical education agenda. Third, because our questionnaire was distributed via email and messaging using QR codes or links, the sample could have been biased towards students with more interest in technology.
CONCLUSION
This study supports the use of artificial intelligence in medical practice. Most medical students showed positive perceptions of and attitudes towards artificial intelligence in healthcare. They showed interest in artificial intelligence and believed that it could be adopted in daily practice if regulated, user-friendly, and accurate. However, they have concerns about this new technology, especially given their limited knowledge of artificial intelligence.
Based on these findings, several important implications should be considered when adopting artificial intelligence in medical education. It is crucial to understand the limitations and potential biases of artificial intelligence algorithms, warranting human oversight and continuous monitoring to identify errors, ensuring final decision-making by the physicians. Additionally, it is important to design medical artificial intelligence courses that are both user-friendly and engaging to ensure that students acquire essential skills for their future medical careers.
SUPPLEMENTARY MATERIAL
Supplementary material 1
Supplementary material 2
Supplementary material 3
Supplementary material 4
Supplementary material 5
REFERENCES
- 1 Liu Y, Jain A, Eng C, Way DH, Lee K, Bui P, et al. A deep learning system for differential diagnosis of skin diseases. Nat Med. 2020;26(6):900-8.
- 2 Fiske A, Henningsen P, Buyx A. Your robot therapist will see you now: ethical implications of embodied artificial intelligence in Psychiatry, Psychology, and Psychotherapy. J Med Internet Res. 2019;21(5):e13216.
- 3 Shortliffe EH, Sepúlveda MJ. Clinical decision support in the era of artificial intelligence. JAMA. 2018;320(21):2199-200.
- 4 Schork NJ. Artificial intelligence and personalized medicine. Cancer Treat Res. 2019;178:265-83.
- 5 Woo M. An AI boost for clinical trials. Nature. 2019;573(7775):S100-2.
- 6 Fleming N. How artificial intelligence is changing drug discovery. Nature. 2018;557(7707):S55-7.
- 7 Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94-8.
- 8 Kitamura FC. ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology. 2023;307(2):e230171.
- 9 Garcia P, Ma SP, Shah S, Smith M, Jeong Y, Devon-Sand A, et al. Artificial intelligence-generated draft replies to patient inbox messages. JAMA Netw Open. 2024;7(3):e243201.
- 10 Haug CJ, Drazen JM. Artificial Intelligence and Machine Learning in Clinical Medicine, 2023. N Engl J Med. 2023;388(13):1201-8.
- 11 McCoy LG, Nagaraj S, Morgado F, Harish V, Das S, Celi LA. What do medical students actually need to know about artificial intelligence? NPJ Digit Med. 2020;3(1):86.
- 12 Crigger E, Reinbold K, Hanson C, Kao A, Blake K, Irons M. Trustworthy Augmented Intelligence in Health Care. J Med Syst. 2022;46(2):12.
- 13 Doumat G, Daher D, Ghanem NN, Khater B. Knowledge and attitudes of medical students in Lebanon toward artificial intelligence: a national survey study. Front Artif Intell. 2022;5:1015418.
- 14 Sit C, Srinivasan R, Amlani A, Muthuswamy K, Azam A, Monzon L, et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging. 2020;11(1):14.
- 15 Tung AY, Dong LW. Malaysian medical students' attitudes and readiness toward AI (artificial intelligence): a cross-sectional study. J Med Educ Curric Dev. 2023;10:23821205231201164.
- 16 Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: systematic review. JMIR Med Educ. 2020;6(1):e19285.
- 17 Mousavi Baigi SF, Sarbaz M, Ghaddaripouri K, Ghaddaripouri M, Mousavi AS, Kimiafar K. Attitudes, knowledge, and skills towards artificial intelligence among healthcare students: a systematic review. Health Sci Rep. 2023;6(3):e1138.
- 18 Laupichler MC, Aster A, Meyerheim M, Raupach T, Mergen M. Medical students' AI literacy and attitudes towards AI: a cross-sectional two-center study using pre-validated assessment instruments. BMC Med Educ. 2024;24(1):401.
- 19 Stewart J, Lu J, Gahungu N, Goudie A, Fegan PG, Bennamoun M, et al. Western Australian medical students' attitudes towards artificial intelligence in healthcare. PLoS One. 2023;18(8):e0290642.
- 20 University College at The University of Texas at San Antonio. Hand-in-hand: UTSA, UT Health San Antonio to launch nation's first dual degree in Medicine and AI. San Antonio, TX: University College at The University of Texas at San Antonio; 2024 [cited 2025 Mar 31]. Available from: https://www.utsa.edu/today/2023/09/story/UTSA-UT-Health-first-dual-degree-in-medicine-and-AI.html
- 21 Scott IA, Carter SM, Coiera E. Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health Care Inform. 2021;28(1):e100450.
- 22 He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019;25(1):30-6.
- 23 Sharma A, Minh Duc NT, Luu Lam Thang T, Nam NH, Ng SJ, Abbas KS, et al. A Consensus-Based Checklist for Reporting of Survey Studies (CROSS). J Gen Intern Med. 2021;36(10):3179-87.
- 24 Kansal R, Bawa A, Bansal A, Trehan S, Goyal K, Goyal N, et al. Differences in knowledge and perspectives on the usage of artificial intelligence among doctors and medical students of a developing country: a cross-sectional study. Cureus. 2022;14(1):e21434.
- 25 Cordioli E, Giavina-Bianchi M, Pedrotti CH, Podgaec S. Brazilian medical survey on telemedicine since the onset of COVID-19. einstein (São Paulo). 2023;21:eAE0428.
- 26 Giavina-Bianchi M, Amaro E Jr, Machado BS. Medical expectations of physicians on AI solutions in daily practice: cross-sectional survey study. JMIRx Med. 2024;5:e50803.
- 27 Buabbas AJ, Miskin B, Alnaqi AA, Ayed AK, Shehab AA, Syed-Abdul S, et al. Investigating students' perceptions towards artificial intelligence in medical education. Healthcare (Basel). 2023;11(9):1298.
- 28 Amiri H, Peiravi S, Rezazadeh Shojaee SS, Rouhparvarzamin M, Nateghi MN, Etemadi MH, et al. Medical, dental, and nursing students' attitudes and knowledge towards artificial intelligence: a systematic review and meta-analysis. BMC Med Educ. 2024;24(1):412.
- 29 Chen M, Zhang B, Cai Z, Seery S, Gonzalez MJ, Ali NM, et al. Acceptance of clinical artificial intelligence among physicians and medical students: a systematic review with cross-sectional survey. Front Med (Lausanne). 2022;9:990604.
- 30 Li X, Jiang MY, Jong MS, Zhang X, Chai CS. Understanding medical students' perceptions of and behavioral intentions toward learning artificial intelligence: a survey study. Int J Environ Res Public Health. 2022;19(14):8733.
- 31 AlZaabi A, AlMaskari S, AalAbdulsalam A. Are physicians and medical students ready for artificial intelligence applications in healthcare? Digit Health. 2023;9:20552076231152167.
- 32 Moldt JA, Festl-Wietek T, Madany Mamlouk A, Nieselt K, Fuhl W, Herrmann-Werner A. Chatbots for future docs: exploring medical students' attitudes and knowledge towards artificial intelligence and medical chatbots. Med Educ Online. 2023;28(1):2182659.
- 33 Baumgartner M, Sauer C, Blagec K, Dorffner G. Digital health understanding and preparedness of medical students: a cross-sectional study. Med Educ Online. 2022;27(1):2114851.
- 34 McLennan S, Meyer A, Schreyer K, Buyx A. German medical students' views regarding artificial intelligence in medicine: a cross-sectional survey. PLOS Digit Health. 2022;1(10):e0000114.
Edited by
Associate Editor: Luciano Cesar Pontes de Azevedo, Instituto Israelita de Ensino e Pesquisa Albert Einstein, São Paulo, SP, Brazil. ORCID: https://orcid.org/0000-0001-6759-3910
Publication Dates
Publication in this collection: 17 Oct 2025
Date of issue: 2025
History
Received: 21 Sept 2024
Accepted: 12 Mar 2025