Academic Editor
- Jaume Sastre-Garriga
1 Department of Neurology, Hospital Universitario 12 de Octubre, 28041 Madrid, Spain
2 Department of Neurology, Hospital Universitario La Luz, 28003 Madrid, Spain
Abstract
The advancement of artificial intelligence (AI), particularly generative AI, has significantly transformed the field of medicine, impacting healthcare delivery, medical education, and research. While the opportunities are substantial, the implementation of AI also raises important ethical and technical challenges, including risks related to data bias, the potential erosion of clinical skills, and concerns about information privacy.
AI has demonstrated great potential for optimizing both clinical and educational processes. However, because it operates by probabilistic prediction, it is inherently prone to errors and biases. Healthcare professionals must be aware of these limitations and advocate for transparent, responsible, and safe integration of AI, while retaining full ethical and legal responsibility for clinical decisions. It is essential to safeguard traditional clinical competencies and to prioritize the use of AI for automating low-value, repetitive tasks. In biomedical research, transparency and independent validation are crucial to ensure the reproducibility of findings. Similarly, in medical education, structured training in AI is vital to enable professionals to apply these tools safely and effectively in clinical practice.
Generative AI offers transformative potential for medicine, but its adoption must be guided by rigorous ethical standards. Comprehensive training, risk mitigation, and the preservation of core clinical skills are essential pillars of its responsible implementation. This transformation must be led by the medical profession to ensure a patient-centered approach to care.
Keywords
- artificial intelligence
- medical ethics
- delivery of health care
- biomedical research
- medical education
- clinical practice
References
- [1] Healthcare Information and Management Systems Society. AI Adoption in Healthcare Report 2024 [Internet]. 2024. Available at: https://cdn.sanity.io/files/sqo8bpt9/production/68216fa5d161adebceb50b7add5b496138a78cdb.pdf (Accessed: 12 December 2024).
- [2] Garrison GM, Bernard ME, Rasmussen NH. 21st-century health care: the effect of computer use by physicians on patient satisfaction at a family medicine clinic. Family Medicine. 2002; 34: 362–368.
- [3] Sinsky C, Colligan L, Li L, Prgomet M, Reynolds S, Goeders L, et al. Allocation of Physician Time in Ambulatory Practice: A Time and Motion Study in 4 Specialties. Annals of Internal Medicine. 2016; 165: 753–760. https://doi.org/10.7326/M16-0961.
- [4] ChatGPT. In: Wikipedia, la enciclopedia libre [Internet]. 2024. Available at: https://es.wikipedia.org/w/index.php?title=ChatGPT&oldid=164117416 (Accessed: 14 December 2024).
- [5] Statista. Inteligencia artificial (IA) [Internet]. 2025. Available at: https://es.statista.com/temas/6692/inteligencia-artificial-ia/?utm_source=chatgpt.com (Accessed: 14 December 2024).
- [6] Au Yeung J, Wang YY, Kraljevic Z, Teo JTH. Artificial intelligence (AI) for neurologists: do digital neurones dream of electric sheep? Practical Neurology. 2023; 23: 476–488. https://doi.org/10.1136/pn-2023-003757.
- [7] Kalani M, Anjankar A. Revolutionizing Neurology: The Role of Artificial Intelligence in Advancing Diagnosis and Treatment. Cureus. 2024; 16: e61706. https://doi.org/10.7759/cureus.61706.
- [8] Lifschitz V. John McCarthy (1927–2011). Nature. 2011; 480: 40. https://doi.org/10.1038/480040a.
- [9] Hirani R, Noruzi K, Khuram H, Hussaini AS, Aifuwa EI, Ely KE, et al. Artificial Intelligence and Healthcare: A Journey through History, Present Innovations, and Future Possibilities. Life (Basel, Switzerland). 2024; 14: 557. https://doi.org/10.3390/life14050557.
- [10] Gottfredson LS. Mainstream science on intelligence: An editorial with 52 signatories, history, and bibliography. Intelligence. 1997; 24: 13–23.
- [11] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, et al. Attention Is All You Need. In: Advances in Neural Information Processing Systems [Internet]. Curran Associates, Inc. 2017. Available at: https://papers.nips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html (Accessed: 14 December 2024).
- [12] Yin S, Fu C, Zhao S, Li K, Sun X, Xu T, et al. A survey on multimodal large language models. National Science Review. 2024; 11: nwae403. https://doi.org/10.1093/nsr/nwae403.
- [13] Almyranti M, Sutherland E, Ash DN, Eiszele S. Artificial Intelligence and the health workforce: Perspectives from medical associations on AI in health [Internet]. Paris: OECD. 2024. Available at: https://www.oecd-ilibrary.org/science-and-technology/artificial-intelligence-and-the-health-workforce_9a31d8af-en (Accessed: 1 December 2024).
- [14] Combi C, Amico B, Bellazzi R, Holzinger A, Moore JH, Zitnik M, et al. A manifesto on explainability for artificial intelligence in medicine. Artificial Intelligence in Medicine. 2022; 133: 102423. https://doi.org/10.1016/j.artmed.2022.102423.
- [15] Nazer LH, Zatarah R, Waldrip S, Ke JXC, Moukheiber M, Khanna AK, et al. Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digital Health. 2023; 2: e0000278. https://doi.org/10.1371/journal.pdig.0000278.
- [16] Rodriguez JA, Alsentzer E, Bates DW. Leveraging large language models to foster equity in healthcare. Journal of the American Medical Informatics Association: JAMIA. 2024; 31: 2147–2150. https://doi.org/10.1093/jamia/ocae055.
- [17] Harari YN. Nexus: Una breve historia de las redes de información desde la Edad de Piedra hasta la IA. Barcelona: Debate. 2024.
- [18] IBM. ¿Qué son los datos sintéticos? [Internet]. 2023. Available at: https://www.ibm.com/es-es/topics/synthetic-data (Accessed: 28 December 2024).
- [19] Yang R, Ning Y, Keppo E, Liu M, Hong C, Bitterman DS, et al. Retrieval-Augmented Generation for Generative Artificial Intelligence in Medicine. arXiv. 2024. https://doi.org/10.48550/arXiv.2406.12449. (preprint)
- [20] Hopkin G, Branson R, Campbell P, Coole H, Cooper S, Edelmann F, et al. Considerations for regulation and evaluation of digital mental health technologies. Digital Health. 2024; 10: 20552076241293313. https://doi.org/10.1177/20552076241293313.
- [21] Farah L, Borget I, Martelli N, Vallee A. Suitability of the Current Health Technology Assessment of Innovative Artificial Intelligence-Based Medical Devices: Scoping Literature Review. Journal of Medical Internet Research. 2024; 26: e51514. https://doi.org/10.2196/51514.
- [22] Sørensen NL, Bemman B, Jensen MB, Moeslund TB, Thomsen JL. Machine learning in general practice: scoping review of administrative task support and automation. BMC Primary Care. 2023; 24: 14. https://doi.org/10.1186/s12875-023-01969-y.
- [23] Bundy H, Gerhart J, Baek S, Connor CD, Isreal M, Dharod A, et al. Can the Administrative Loads of Physicians be Alleviated by AI-Facilitated Clinical Documentation? Journal of General Internal Medicine. 2024; 39: 2995–3000. https://doi.org/10.1007/s11606-024-08870-z.
- [24] van Buchem MM, Kant IMJ, King L, Kazmaier J, Steyerberg EW, Bauer MP. Impact of a Digital Scribe System on Clinical Documentation Time and Quality: Usability Study. JMIR AI. 2024; 3: e60020. https://doi.org/10.2196/60020.
- [25] Armbruster J, Bussmann F, Rothhaas C, Titze N, Grützner PA, Freischmidt H. “Doctor ChatGPT, Can You Help Me?” The Patient’s Perspective: Cross-Sectional Study. Journal of Medical Internet Research. 2024; 26: e58831. https://doi.org/10.2196/58831.
- [26] Ng JY, Maduranayagam SG, Suthakar N, Li A, Lokker C, Iorio A, et al. Attitudes and perceptions of medical researchers towards the use of artificial intelligence chatbots in the scientific process: an international cross-sectional survey. The Lancet Digital Health. 2025; 7: e94–e102. https://doi.org/10.1016/S2589-7500(24)00202-4.
- [27] Lee JM. Strategies for integrating ChatGPT and generative AI into clinical studies. Blood Research. 2024; 59: 45. https://doi.org/10.1007/s44313-024-00045-3.
- [28] N5now. Alucinaciones, la mayor falla de la inteligencia artificial: cómo evitar que ChatGPT, Gemini y Meta AI respondan con errores [Internet]. 2024. Available at: https://blog.n5now.com/alucinaciones-la-mayor-falla-de-la-inteligencia-artificialcomo-evitar-que-chatgpt-gemini-y-meta-ai-respondancon-errores/ (Accessed: 28 December 2024).
- [29] Vectara. Why building your own RAG stack can be a costly mistake [Internet]. 2024. Available at: https://www.vectara.com/blog/why-building-your-own-rag-stack-can-be-a-costly-mistake (Accessed: 28 December 2024).
- [30] Business Wire. Vectara lanza una Puntuación de Coherencia Fáctica basada en un modelo de evaluación de alucinaciones de Hughes perfeccionado para mejorar la transparencia en las respuestas de GenAI [Internet]. 2024. Available at: https://www.businesswire.com/news/home/20240326338865/es/ (Accessed: 28 December 2024).
- [31] Hughes S, Bae M, Li M. Vectara Hallucination Leaderboard [Internet]. 2023. Available at: https://github.com/vectara/hallucination-leaderboard (Accessed: 28 December 2024).
- [32] Franco D’Souza R, Mathew M, Mishra V, Surapaneni KM. Twelve tips for addressing ethical concerns in the implementation of artificial intelligence in medical education. Medical Education Online. 2024; 29: 2330250. https://doi.org/10.1080/10872981.2024.2330250.
- [33] Crotty E, Singh A, Neligan N, Chamunyonga C, Edwards C. Artificial intelligence in medical imaging education: Recommendations for undergraduate curriculum development. Radiography (London, England: 1995). 2024; 30 Suppl 2: 67–73. https://doi.org/10.1016/j.radi.2024.10.008.
Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
