Artificial Intelligence: Utopia or Dystopia?

Guest Contributor: Fengyan Deng
PhD in Nursing student, Texas Woman’s University

Nursology.net Blogs on AI

Artificial Intelligence (AI) has been penetrating almost every aspect of human life, often without our conscious awareness. Examples span from personal experiences, such as using Google Maps or customer-service chatbots, to applications across entire industries. One industrial example is humanoid robots undertaking human tasks to improve efficiency, safety, and operational workflows. From 1993 to 2014, the deployment of roughly 180,000 robots was associated with the loss of approximately 720,000 jobs. Robotics increases productivity by performing repetitive tasks continuously without fatigue and enhances safety by handling hazardous tasks (Wei & Watson, 2025).

Artificial Intelligence is the technology that enables computers and machines to simulate human learning and perform tasks that would usually require human intelligence, such as learning, reasoning, problem-solving, and decision-making. Machine Learning (ML) is a form of AI that enables machines to learn from data without being explicitly programmed. Machine learning algorithms include supervised, unsupervised, semi-supervised, and reinforcement learning. ML algorithms can analyze data, learn from the data, and make predictions or decisions based on that learning. Deep learning is a type of ML that trains artificial neural networks with multiple layers to recognize patterns in data; it is used in image and speech recognition, natural language processing, and other applications. Other AI subsets include robotics, computer vision, and expert systems designed to mimic the decision-making abilities of a human expert in a particular domain. AI applications affect finance, healthcare, transportation, and entertainment, and AI continues to converge with other technologies, including the Internet of Things (IoT), blockchain, and quantum computing. As AI systems become increasingly powerful and prevalent across many applications, both the optimistic, boundless utopia of AI and its ominous, unintended dystopian outcomes require examination (Bellini et al., 2022; Sarker, 2021; Cools, Baldwin & Opgenhaffen, 2024).
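To make "learning from data without being explicitly programmed" concrete, here is a minimal sketch of supervised learning in Python. The points, labels, and the nearest-neighbor approach are invented for illustration and are far simpler than the models used in practice:

```python
import math

def nearest_neighbor_predict(train_X, train_y, x):
    """Classify x by copying the label of its closest labeled example.

    This is supervised learning at its simplest: no classification rule
    is explicitly programmed; the labeled data itself drives the answer.
    """
    distances = [math.dist(x, xi) for xi in train_X]
    return train_y[distances.index(min(distances))]

# Toy labeled data: two clusters of 2-D points (purely illustrative).
train_X = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.0), (7.5, 8.2)]
train_y = ["low", "low", "high", "high"]

print(nearest_neighbor_predict(train_X, train_y, (1.1, 0.9)))  # low
print(nearest_neighbor_predict(train_X, train_y, (7.9, 7.8)))  # high
```

Deep learning replaces this single distance computation with many layers of learned transformations, but the principle is the same: the behavior comes from the data, not from hand-written rules.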

Source: AI-generated image: a fake image of the Pope wearing a white puffer jacket (Pablo Xavier; annotation by NPR).

Utopia and dystopia are reflections of human life in literature by writers throughout the centuries. The concept of “utopia” can be traced back to the publication of Thomas More’s Utopia (1516), which described an ideal place free from problems. The term “dystopia” was introduced by the English philosopher J. S. Mill, inspired by More’s Utopia, and refers to an imperfect place dominated by evil. The concept of a utopian society as a perfect model has been questioned and challenged throughout the centuries. For example, the Industrial Revolution was an important factor in humanity’s socioeconomic progress: it fulfilled human beings’ basic needs through science and technology and created wealth, but failed to ensure its equal distribution, leading to poverty. The destructive effects of the Industrial Revolution challenged the promise of utopian models. Utopian writers pictured a future in which lifestyles are improved by scientific and technological development; dystopian works concern the emergence of uncertain future societies characterized by the abuse of power, exploitation, and the abuse of technology. Some science fiction portrays utopian and dystopian themes to express humans’ hopes and fears about a future linked to science and technology (Cools, Baldwin & Opgenhaffen, 2024; Distinctions between utopia, antiutopia and dystopia, 2024).

As we leap towards an era dominated by unprecedented AI advancements, addressing AI’s utopian effects and dystopian concerns becomes increasingly pertinent. Generative AI (GAI) is a type of artificial intelligence that can generate text, images, and videos in response to a user prompt. Large language models (LLMs), including ChatGPT, Copilot, Gemini, and Llama (Meta), can perform a variety of language-based tasks, such as generating, summarizing, and translating text. However, the rise of GAI has introduced “AI slop,” a term that refers to the flooding of platforms with low-quality, machine-generated content, ranging from generic essays and clickbait blogs to mysterious images and synthetic videos. LLM-based image generators and audio synthesis tools enable anyone to create polished essays, articles, images, or songs at trivial cost and unprecedented speed, raising epistemic concerns. False or low-value information diminishes trust in knowledge infrastructures and undermines human creativity; its introduction erodes the reliability of knowledge systems and contributes to epistemic pollution. In academic contexts, this is demonstrated by fabricated citations, generic essays, and automated reviews (Madsen & Puyt, 2025).

Artificial intelligence may revolutionize various aspects of health care, such as diagnosis. Deep learning can identify key patterns for disease detection across large datasets. Examples include feeding a large dataset of mammograms into an AI system for breast cancer diagnosis, diagnosing melanoma, detecting diabetic retinopathy and electrocardiogram (EKG) abnormalities, and analyzing medical images such as X-rays, CT scans, and MRIs.

Predictive analytics relies heavily on modeling, data mining, ML algorithms, and other technologies to analyze data and develop predictive models that improve patient outcomes and reduce costs. For example, by analyzing medical history, demographics, and lifestyle factors, a predictive model can identify patients at risk of developing chronic diseases, such as endocrine or cardiac conditions, and target interventions to prevent or treat them. There are numerous direct-to-consumer (DTC) AI tools used by patients or individuals with health or wellness concerns. Examples include an app that uses a smartphone camera to help an individual self-diagnose a dermatologic condition and an algorithm that uses biosensor data from a smartwatch to detect falls or arrhythmias. There are currently more than 350,000 mobile health apps embedded with AI. Three in 10 adults worldwide have used a mobile health app, and the market is already over $70 billion annually (Alowais et al., 2023; Angus et al., 2025).
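As a sketch of how such a predictive model might score chronic-disease risk, the logistic model below combines a few patient features into a probability. The features, coefficients, and intercept are invented for illustration; a real model would learn its weights from historical patient records:

```python
import math

# Hypothetical coefficients; a real model would learn these from data.
WEIGHTS = {"age": 0.04, "bmi": 0.08, "smoker": 0.9, "family_history": 0.7}
INTERCEPT = -6.0

def chronic_disease_risk(patient: dict) -> float:
    """Return a 0-1 risk score from a logistic model over patient features."""
    score = INTERCEPT + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

# Patients above a chosen threshold could be flagged for early intervention.
low_risk = chronic_disease_risk(
    {"age": 30, "bmi": 22, "smoker": 0, "family_history": 0})
high_risk = chronic_disease_risk(
    {"age": 65, "bmi": 31, "smoker": 1, "family_history": 1})
```

The value of such a model in practice depends entirely on the quality and representativeness of the data behind its weights, which is exactly where the bias concerns discussed below arise.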

There are considerable dystopian concerns about AI in healthcare. Examples include the dehumanization of patient care by reducing patients to mere “data points,” in which patients’ unique emotional and psychological needs are ignored in favor of standardized, algorithm-based treatment plans. AI bias issues spread across all medical fields. The accuracy and reliability of AI algorithms depend heavily on the quality and representativeness of the data used to train them. If the data used to train an AI model is biased, the model will produce biased results, with severe consequences, including inaccurate diagnoses and treatment recommendations. It could also perpetuate discrimination and exacerbate health disparities, resulting in unequal treatment of patients based on factors such as race, gender, and socioeconomic status (Aquino et al., 2023; Chin et al., 2023; Norori et al., 2021).

Hallucinations in LLMs refer to outputs that are factually incorrect, logically contradictory, or ungrounded in reliable sources. Medical hallucinations can arise within specialized tasks such as diagnostic reasoning, therapeutic planning, or interpretation of laboratory findings, and these inaccuracies have immediate implications for patient care. Compared with hallucinations in general domains, the impact of medical hallucinations is far more severe: errors in clinical reasoning or misleading treatment recommendations can directly harm patients by delaying proper care, redirecting care pathways, or leading to inappropriate interventions that undermine patient safety.

LLM hallucinations can also be found in other tasks such as clinical documentation, clinical note generation, consultation, and summarization. LLMs are known to generate information that is not present in the input data or to omit relevant information from the original document. Inaccuracies in the summarization task can introduce misleading details into transcribed conversations or summaries, potentially delaying diagnoses and causing unnecessary patient anxiety. The problem of hallucinations has previously been attributed to data quality during model training and to model training methodology, but recent findings suggest that hallucination may be an intrinsic, theoretical property of all LLMs. The prevalence, causes, and evaluation of medical hallucinations, and their impact on patient safety, remain open questions (Asgari et al., 2025).
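The idea that a generated summary can contain details absent from its input can be illustrated with a naive grounding check. The clinical note, summary, and number-matching heuristic below are invented for illustration; real hallucination detection is far harder than this:

```python
import re

def ungrounded_numbers(source: str, summary: str) -> set:
    """Flag numeric tokens that appear in a generated summary
    but nowhere in the source note (a crude grounding heuristic)."""
    def nums(text):
        return set(re.findall(r"\d+(?:\.\d+)?", text))
    return nums(summary) - nums(source)

note = "BP 142/88, heart rate 76, started metformin 500 mg daily."
summary = "Patient on metformin 850 mg daily; BP 142/88."
print(ungrounded_numbers(note, summary))  # {'850'}: a dose not in the note
```

A mismatch like the fabricated dose above is exactly the kind of error that can redirect care or alarm a patient if a clinician does not catch it.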

AI integration into nursing improves efficiency, accuracy, and patient outcomes by automating routine, repetitive, data-intensive, and administrative tasks. For example, the TUG robots at Dartmouth Hitchcock Medical Center autonomously navigate clinical environments to deliver medications efficiently and securely. Patient mobility assistance automation reduces both patient risk and the physical demands on nurses: robotic lifts and transfer devices help with positioning and mobility tasks, smart beds equipped with pressure sensors and automated positioning features improve patient comfort and prevent pressure ulcers, and robotic exoskeletons assist patients with mobility impairments during rehabilitation, promoting independence while reducing nurse burden. Automated, streamlined data measurement, recording, retrieval, and sharing facilitate speedy documentation and information exchange, and automated scheduling systems optimize shift planning (Pepito et al., 2025).

Despite the technology’s efficiency, nursing professionals are concerned about the dehumanization of care and of the human role in caring. Robotic medication carts can enhance efficiency, but patients perceive these interactions as impersonal. Emotional AI tools, such as chatbots and virtual assistants, integrate AI and affective computing; they are designed to perceive, learn from, and interact with human emotions by analyzing a range of data related to words, images, facial expressions, gaze direction, gestures, voices, and physiological signals, such as heart rate, body temperature, respiration, and skin conductivity. From these real-time emotion data, emotional AI can adapt its responses to users’ emotional cues, providing empathetic and supportive interactions that offer immediate strategies, interactive exercises, and simulations for managing emotional states.

However, AI simulating empathy is limited to emotional cues in a narrow range of data and responds with preprogrammed algorithms designed to mimic empathetic behavior and deliver comforting messages, creating an illusion of understanding and support. AI-driven tools can detect emotional patterns, such as frustration or sadness, and adjust responses accordingly, giving the appearance of empathy, but these algorithmic techniques do not constitute an authentic human-to-human relational connection; they lack genuine concern and true compassionate presence. Authentic human empathy requires sensitivity, deep understanding, and a genuine desire to alleviate distress and to share another person’s emotional experience, as provided by the nursing profession. AI can provide data-driven insights into patient needs, but it cannot replace the depth of human relational care, emotional resonance, or intentionality inherent in the nursing profession (Thakkar, Gupta, & De Sousa, 2024; Wei & Watson, 2025).

Source: ChatGPT. See also: “Rushed, misleading and just plain bad AI content is everywhere – why?”
https://www.techradar.com/computing/artificial-intelligence/ai-slop-is-taking-over-the-internet-and-ive-had-enough-of-it

As we move into the irresistible era of AI, will it lead to a utopian or a dystopian outcome? AI has undeniably bestowed on us unprecedented utopian advancements, such as increased efficiency, accuracy, safety, and connectivity; improved health care interventions; enhanced communication networks; and increased access to information. However, the potential dystopian outcomes of job displacement, epistemic pollution, algorithmic bias, AI hallucination, and dehumanization in health care necessitate careful and deliberate navigation. The optimistic utopia of AI progress and the ominous dystopia of unintended AI outcomes are in dynamic tension. This dynamic can be controlled and steered toward utopian outcomes through continual evaluation, adaptation, and a collective commitment to fostering a future trajectory in which AI innovation aligns with human values (Muhammad, 2024).

As AI increasingly integrates into healthcare, the nursing profession and discipline need a philosophical and ethical foundation to safeguard nursing’s core values. Watson’s Unitary Caring Science and Theory of Transpersonal Human Caring offer a guiding framework for sustaining professional human caring amid the advent of AI technologies, ensuring that human caring remains central to nursing. Neuman’s systems model guides nursing professionals in providing holistic care to patients (Watson, 2008; Neuman & Fawcett, 2011).

References

Alowais, S. A., Alghamdi, S. S., Alsuhebany, N., Alqahtani, T., Alshaya, A. I., Almohareb, S. N., Aldairem, A., Alrashed, M., Bin Saleh, K., Badreldin, H. A., Al Yami, M. S., Al Harbi, S., & Albekairy, A. M. (2023). Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Medical Education, 23(1), 689. https://doi.org/10.1186/s12909-023-04698-z

Angus, D. C., Khera, R., Lieu, T., Liu, V., Ahmad, F. S., Anderson, B., Bhavani, S. V., Bindman, A., Brennan, T., Celi, L. A., Chen, F., Cohen, I. G., Denniston, A., Desai, S., Embí, P., Faisal, A., Ferryman, K., Gerhart, J., Gross, M., . . . JAMA Summit on AI. (2025). AI, health, and health care today and tomorrow: The JAMA Summit report on artificial intelligence. JAMA, 334(18), 1650–1664. https://doi.org/10.1001/jama.2025.18490

Aquino, Y. S. J., Rogers, W. A., Braunack-Mayer, A., Frazer, H., Win, K. T., Houssami, N., Degeling, C., Semsarian, C., & Carter, S. M. (2023). Utopia versus dystopia: Professional perspectives on the impact of healthcare artificial intelligence on clinical roles and skills. International Journal of Medical Informatics, 169, 104903. https://doi.org/10.1016/j.ijmedinf.2022.104903

Asgari, E., Montaña-Brown, N., Dubois, M., Khalil, S., Balloch, J., Yeung, J. A., & Pimenta, D. (2025). A framework to assess clinical safety and hallucination rates of LLMs for medical text summarisation. NPJ Digital Medicine, 8(1), 274. https://doi.org/10.1038/s41746-025-01670-7

Bellini, V., Valente, M., Gaddi, A. V., Pelosi, P., & Bignami, E. (2022). Artificial intelligence and telemedicine in anesthesia: Potential and problems. Minerva Anestesiologica, 88(9), 729–734. https://doi.org/10.23736/S0375-9393.21.16241-8

Chin, M. H., Afsar-Manesh, N., Bierman, A. S., Chang, C., Colón-Rodríguez, C. J., Dullabh, P., Duran, D. G., Fair, M., Hernandez-Boussard, T., Hightower, M., Jain, A., Jordan, W. B., Konya, S., Moore, R. H., Moore, T. T., Rodriguez, R., Shaheen, G., Snyder, L. P., Srinivasan, M., . . . Ohno-Machado, L. (2023). Guiding principles to address the impact of algorithm bias on racial and ethnic disparities in health and health care. JAMA Network Open, 6(12), e2345050. https://doi.org/10.1001/jamanetworkopen.2023.45050

Cools, H., Baldwin, V. G., & Opgenhaffen, M. (2024). Where exactly between utopia and dystopia? A framing analysis of AI and automation in US newspapers. Journalism, 25(1), 3–21. https://doi.org/10.1177/14648849221122647

Distinctions between utopia, antiutopia and dystopia. (2024). Western European Journal of Linguistics and Education, 2(11), 195–197. https://westerneuropeanstudies.com/index.php/2/article/view/1748

Medical hallucinations in foundation models and their impact on healthcare. (2025). arXiv. https://arxiv.org/abs/2503.05777

Madsen, D. Ø., & Puyt, R. W. (2025, October 2). The 7Vs of AI slop: A typology of generative waste. SSRN. https://ssrn.com/abstract=5558018 or https://doi.org/10.2139/ssrn.5558018

Muhammad, A. (2024). Technotopia or dystopia? Exploring the future trajectory of human and technological coexistence. SAUS-Journal of IT and Computer Sciences (SJICS). https://saus.com.pk/sjics/index.php/sjics/article/view/9

Neuman, B., & Fawcett, J. (2011). The Neuman systems model (5th ed.). Pearson.

Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., & Tzovara, A. (2021). Addressing bias in big data and AI for health care: A call for open science. Patterns (New York, N.Y.), 2(10), 100347. https://doi.org/10.1016/j.patter.2021.100347

NYU Libraries. (2025, October 7). Generative AI and large language models (LLMs). https://guides.nyu.edu/chatgpt

Pepito, J. A., Acaso, N. J., Merioles, R., & Ismael, J. (2025). Opportunities, challenges, and future directions for the integration of automation in nursing practice: Discursive study. JMIR Nursing, 8, e72674. https://doi.org/10.2196/72674

Sarker, I. H. (2021). Machine learning: Algorithms, real-world applications and research directions. SN Computer Science, 2(3), 160. https://doi.org/10.1007/s42979-021-00592-x

Thakkar, A., Gupta, A., & De Sousa, A. (2024). Artificial intelligence in positive mental health: A narrative review. Frontiers in Digital Health, 6, 1280235. https://doi.org/10.3389/fdgth.2024.1280235

Virginia Tech Engineer. (2023, Fall). AI—The good, the bad, and the scary. Retrieved from https://eng.vt.edu/magazine/stories/fall-2023/ai.html

Watson, J. (2008). Nursing: The philosophy and science of caring (Rev. ed.). University Press of Colorado.

Wei, H., & Watson, J. (2025). Preserving professional human caring in nursing in the era of artificial intelligence. Advances in Nursing Science. Advance online publication. https://doi.org/10.1097/ANS.0000000000000573

About Fengyan Deng

Fengyan Deng, DNP, Certified Registered Nurse Anesthetist (CRNA), is currently pursuing a PhD in nursing at TWU. AI is a pressing and challenging issue we face today. Some people embrace it with manic excitement and others with fear. The age of AI is irresistible. We should discern its boundless benefits to humans as well as its potential dystopian outcomes.

One thought on “Artificial Intelligence: Utopia or Dystopia?”

  1. Thank you for this report. I would like to point out that there is a large and growing compendium of research regarding appropriate use of AI including robots in Nursing, much of it from the research team of Tetsuya Tanioka from Tokushima University in Japan. That research addresses the various dystopian concerns you have mentioned. Another nursing theorist who addresses both utopian AND dystopian concerns is Locsin, whose theory of Technological Competency as Caring in Nursing has a direct bearing on these concerns. Most nursing scholars writing in this area are insistent that nurses and other healthcare professionals be involved from start to finish in any design and implementation decisions regarding use of AI, including robots.
