Abstract
Artificial intelligence (AI) has proven to be a valuable tool in medicine, particularly in diagnosing psychiatric disorders. Studies report gains in accuracy and efficiency compared with human physicians, raising the prospect that AI could replace them. Yet humans remain indispensable: empathy is a necessary component of mental healthcare, and AI models have not been able to reproduce it. This paper argues that a hybrid approach would be most effective in diagnosing mental disorders, harnessing the strengths of both AI and the human brain to diagnose and form treatment plans more effectively. First, I discuss AI’s potential to mitigate bias, which originates in the humans who train these models and leads to an alarming number of misdiagnoses. Next, I discuss AI’s advantage in processing large amounts of information and actively monitoring patients, outperforming the limits of human processing. Then, I discuss the role of empathy in patient care and why AI models cannot recreate the human touch that is ultimately necessary for the best results. Taken together, these points show that a hybrid approach to mental health diagnosis would be the most beneficial.

Introduction
Psychiatric disorders are now responsible for the largest proportion of the global burden of disease (Sun et al., 2023). Psychiatric diagnoses, as described in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5), are generally idiopathic diagnoses of exclusion based on sign and symptom clusters that cause “clinically significant distress or impairment in social, occupational, or other important areas of functioning” (Rodgers & Weinstein, 2017).
Advancements in medical practice and technology have improved the effectiveness of diagnosing patients. The technology relevant to this paper, artificial intelligence, has increasingly been used in the medical field, and its use in psychiatric diagnosis has evolved significantly since the initial efforts of the 1960s. Joseph Weizenbaum’s ELIZA (1966), which simulated a psychotherapist through simple, scripted text exchanges, sparked interest in AI’s potential for mental health applications, although it did not make diagnoses (Weizenbaum, 1966). By the 1990s and 2000s, advances in machine learning allowed models to be trained on small psychiatric datasets (Mitchell, 1997). The 2010s marked a significant leap forward with the rise of deep learning and the availability of large-scale healthcare data (LeCun et al., 2015). Today, AI plays a crucial role in decision-support tools that assist mental health professionals in diagnosing psychiatric disorders, alongside applications such as AI-driven virtual therapists and chatbots (Fitzpatrick et al., 2017). However, human clinicians remain essential in interpreting AI findings and making final diagnoses.
Whether a human or an artificial intelligence model is involved in diagnosing psychiatric disorders, misdiagnoses occur, and they are consequential, sometimes resulting in “a fatal chain of wrong decisions” (Mendel et al., 2011). In the area of patient safety, recent attention has focused on why such errors occur (Croskerry, 2003). In this paper, I will examine the differences between human cognition and artificial intelligence in bias, data processing, and the nuance of the human mind. I will argue that these differences suggest that a hybrid approach, combining human insight with artificial intelligence, could lead to more accurate psychiatric diagnoses than relying solely on either the human mind or AI.

Biases in both artificial intelligence and the human brain
To discuss the role of bias in AI and the human brain, it is important first to define bias. Cognitive biases are unconscious and systematic errors in thinking that occur when people process and interpret information in their surroundings, influencing their decisions and judgments (Kahneman et al., 1982). Though Kahneman was describing the human brain specifically, cognitive bias is a phenomenon found in both humans and AI systems.
The root of cognitive bias within artificial intelligence models is, in fact, the human brain. The most prominent artificial intelligence models are trained on human data, which is often flawed or skewed. In his article “Confronting the Mirror: Reflecting on Our Biases Through AI in Health Care,” Dr. Ted James, a professor at Harvard Medical School, describes an AI model used across several U.S. health systems that exhibited bias by prioritizing healthier white patients over sicker Black patients for additional care management, because it had been trained on cost data rather than care needs. Though bias in AI has proven to be a problem, James is hopeful that recognizing it will lead to change, as AI can be improved by “involving ethicists, sociologists, patient advocates, and other stakeholders” (James, 2024).
Recognizing that AI’s flaws reflect our own biases provides an opportunity for introspection and improvement in clinical practice. James argues that AI can be made less biased by exposing it to diverse and comprehensive data: “This means including more comprehensive datasets that accurately reflect the demographic diversity of the population. For example, increasing the representation of minority groups in clinical trials and health records can help train more equitable AI systems” (James, 2024), which in turn can lead to more accurate diagnoses. While humans can be trained in similar ways, AI can process far more information to counteract these biases, a point discussed in more depth later. At the same time, some research shows that unconscious bias in humans can be combated through strategies such as acknowledging the bias and taking steps to mitigate it in the future (Atkins, 2023).
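To make this concrete, the sketch below is a minimal, hypothetical illustration in Python (invented data, arbitrary group labels, and a generic scikit-learn classifier, not any deployed clinical system) of one common way to act on James’s recommendation: reweighting training records so that under-represented demographic groups carry as much influence as well-represented ones during model fitting.

```python
# Minimal sketch: group-balanced reweighting of training data.
# All data, group labels, and the model choice here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def group_balanced_weights(groups: np.ndarray) -> np.ndarray:
    """Weight each record inversely to the size of its demographic group,
    so no group dominates training simply because it has more records."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts))
    return np.array([len(groups) / (len(values) * freq[g]) for g in groups])

# Toy stand-ins: clinical features X, diagnostic label y, demographic group per patient
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 2, size=1000)
groups = rng.choice(["A", "B"], size=1000, p=[0.9, 0.1])  # group B under-represented

weights = group_balanced_weights(groups)
model = LogisticRegression().fit(X, y, sample_weight=weights)
```

Reweighting is only one option among several; collecting genuinely representative data, as James suggests, remains the more fundamental fix.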
Clearly, bias is a problem in diagnosing mental illness, and it is a problem that ultimately stems from humans.
Confirmation bias is a specific type of cognitive bias that involves favoring information that confirms preexisting beliefs or hypotheses while ignoring evidence to the contrary. When examining a patient, a physician may hold preexisting biases of many kinds, which can, in the worst case, lead to an incorrect diagnosis.
To study whether psychiatrists and medical students are prone to confirmation bias, and whether confirmation bias leads to poor diagnostic accuracy in psychiatry, Mendel et al. (2011) presented an experimental decision task to 75 psychiatrists and 75 medical students. Psychiatrists who conducted a confirmatory information search made a wrong diagnosis in 70% of cases, compared with 27% and 47% for those who conducted a disconfirmatory or balanced search (students: 63%, 2%, and 27%, respectively). Participants who chose the wrong diagnosis also prescribed different treatment options than those who chose the correct one. This study shows that humans are not always able to treat each medical case on its own terms; the human mind tends to defend the preliminary diagnosis even when the data suggest it is wrong. Conversely, AI is designed to provide objective output and does not need to keep a previous diagnosis “in mind.” Problems that plague the human mind in diagnosis could, in principle, be designed out of AI.
Confirmation bias is not the only bias that poses significant challenges. Jonathan Howard, a neurologist and psychiatrist, draws attention to hindsight bias, noting that “knowing the outcome of an event profoundly influences people’s perception of the events that preceded it” (Howard, 2019). Hindsight bias can lead to misdiagnosis by causing clinicians to overestimate the predictability of a diagnosis once the outcome is known, to overlook uncertainty or alternative diagnoses, and to misinterpret past symptoms or test results as more indicative of the final diagnosis than they truly were.
AI, when trained on sufficiently varied data and with techniques that account for uncertainty, can outperform humans here by reporting how certain each prediction is rather than reinterpreting the evidence after the outcome is known. AI is not driven by emotional needs, so it avoids many of the psychological traps that produce hindsight bias in humans.
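One simple version of such an uncertainty-aware technique is sketched below. It is a hypothetical example (invented data, a generic classifier, and an arbitrary 0.85 confidence threshold), in which the model reports a diagnosis only when its predicted probability is high and defers ambiguous cases to a human clinician.

```python
# Minimal sketch: defer low-confidence cases to a clinician.
# Data, model, and the 0.85 threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def triage(model, X, threshold: float = 0.85):
    """Return the predicted class when the model is confident,
    or 'refer to clinician' when it is not."""
    probs = model.predict_proba(X)                 # class probabilities per case
    labels = model.classes_[probs.argmax(axis=1)]  # most likely class
    confident = probs.max(axis=1) >= threshold     # is the model sure enough?
    return [lbl if ok else "refer to clinician" for lbl, ok in zip(labels, confident)]

# Toy stand-ins for patient features and diagnostic labels
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(500, 4)), rng.integers(0, 2, size=500)
X_new = rng.normal(size=(5, 4))

clf = LogisticRegression().fit(X_train, y_train)
print(triage(clf, X_new))
```

This kind of deferral also previews the hybrid model discussed later: the machine handles the clear-cut cases and flags the uncertain ones for human judgment.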
Overall, bias is a problem for both artificial intelligence and the human brain during diagnosis, though many of AI’s problems stem from human biases embedded in its training data. Still, AI is better equipped to mitigate bias, because it can be retrained on new data quickly and systematically in ways that humans cannot.

Data Processing
This section will explore the role of data processing in producing psychiatric diagnoses, the differences between the data-processing abilities of humans and AI models, and, ultimately, why AI outperforms humans in this area. As discussed above, bias also plays a key role in data processing: an AI model that can process information without significant bias should prove more consistent and accurate.
Artificial intelligence models appear to be making significant advances in predicting and diagnosing mental disorders; AI is highly effective at processing large datasets and identifying patterns across various features, such as visual, acoustic, verbal, and physiological inputs. Its ability to analyze data without bias allows it to make predictions and diagnoses consistently, which can sometimes surpass human efforts, particularly when large volumes of data are involved (Yan, Ruan, & Jiang, 2022).
AI models such as Tess, Brainify.AI, Woebot, and Ellie are advancing mental health care by offering personalized treatment plans based on large datasets. These models gather and process data from user interactions, emotional feedback, and physiological markers, such as heart rate and sleep patterns, to adapt and offer timely and accurate interventions. AI models excel at analyzing large volumes of data quickly and efficiently, allowing them to identify patterns and correlations across numerous data sources, which would be difficult for a human to process manually.
As Mariam Khayretdinova, CEO of Brainify.AI, points out, “AI’s ability to process and analyze large datasets is particularly beneficial in psychiatry, where the complexity of mental health conditions often requires extensive data analysis to provide accurate diagnoses and personalized treatment plans.” Unlike humans, AI can continuously process vast datasets in real time, making it especially effective for tracking ongoing mental health changes, such as fluctuations in mood or behavior.
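As a rough illustration of what such continuous monitoring might look like, the sketch below is entirely hypothetical (simulated daily mood scores, an arbitrary 14-day baseline window, and an arbitrary alert threshold, not a validated clinical rule): it flags sustained drops in self-reported mood against a rolling baseline.

```python
# Minimal sketch: flag sustained drops in daily mood against a rolling baseline.
# The data, window size, and threshold are invented for illustration only.
import numpy as np

def flag_mood_decline(mood: np.ndarray, window: int = 14, drop: float = 1.5) -> list[int]:
    """Flag days where the average of the last 3 days falls at least `drop`
    points (on a 1-10 scale) below the preceding `window`-day baseline."""
    flags = []
    for t in range(window + 3, len(mood)):
        baseline = mood[t - window - 3:t - 3].mean()  # earlier two weeks
        recent = mood[t - 3:t].mean()                 # most recent three days
        if baseline - recent >= drop:
            flags.append(t)
    return flags

# Simulated record: stable mood for a month, then a steady decline
scores = np.concatenate([np.full(30, 7.0), np.linspace(7.0, 3.0, 20)])
print(flag_mood_decline(scores))  # days that merit a clinician's attention
```

A human could compute the same thing for one patient; the point of the passage above is that an automated system can do it continuously, for many patients at once.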
Researchers from IBM and the University of California have highlighted the potential of AI to perform mental health diagnoses at high accuracy rates—ranging from 70% to 92%—depending on the data quality and training models used. This is a remarkable achievement; AI systems are more adept at detecting patterns within large datasets, including social media activity or medical records, that might go unnoticed by human practitioners.
A study by Shen et al. (2019) found that AI’s ability to analyze large datasets in diagnosis matched or even exceeded the diagnostic accuracy of human clinicians on metrics such as sensitivity, specificity, and false-positive rate. AI’s success in processing large datasets suggests it can function as an incredibly effective support tool for clinicians, assisting them in making more informed decisions.
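For readers unfamiliar with these evaluation terms, the short sketch below (with made-up counts) shows how sensitivity, specificity, and false-positive rate are computed from a diagnostic confusion matrix.

```python
# Minimal sketch of the metrics named above, using invented counts.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity: share of true cases detected; specificity: share of
    non-cases correctly cleared; false-positive rate = 1 - specificity."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {"sensitivity": sensitivity,
            "specificity": specificity,
            "false_positive_rate": 1 - specificity}

# e.g. 85 of 100 true cases detected, 90 of 100 non-cases correctly cleared
print(diagnostic_metrics(tp=85, fp=10, tn=90, fn=15))
# sensitivity 0.85, specificity 0.90, false-positive rate 0.10
```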
AI does not always process data in a way that is fundamentally different from how a human would. The major difference lies in AI’s processing capacity, which humans cannot match. Though a human could read through a patient’s social media or monitor data in real time, AI can perform these tasks in a fraction of the time it would take a person.

Empathy and the Hybrid Model
While artificial intelligence models can surpass humans in areas such as bias and data-processing ability, the human mind is not entirely outmatched. Artificial intelligence, at least in its current form, cannot feel empathy, or at least not in the way that humans can. AI also struggles with the adaptability and nuanced understanding that human cognition brings to psychiatric diagnoses. Mental disorders like depression are highly subjective and influenced by individual experiences, cultural factors, and complex emotional states. While prone to confirmation bias, human clinicians have the advantage of empathy and contextual reasoning, enabling them to adapt their approach to each patient’s unique circumstances (Mendel et al., 2011). Conversely, AI lacks the emotional intelligence to account for these subtleties, and its reliance on data-driven models is limited by small sample sizes, artificial data environments, and oversimplified diagnostic categories (Yan et al., 2022).
Recent research by Rubin et al. highlights the role of empathy and how it bears on mental health diagnosis. Their paper, “Considering the Role of Human Empathy in AI-Driven Therapy,” argues that while AI can be trained to assess emotional states based on facial expressions, it cannot “partake in emotional experiences…therefore, regardless of how eloquently it crafts a response to seem like it shares an emotional experience, this response will be untruthful, as it does not share any experience” (Rubin et al., 2024). From the patient’s perspective, empathy from human care providers feels genuine in a way that AI cannot authentically reproduce. Studies have consistently linked empathy to positive treatment outcomes (Rubin et al., 2024). Human empathy is thus an effective and useful tool in creating a plan and path of care for a patient, one that AI cannot provide.
A hybrid model that combines AI with human expertise could be considered a more effective approach for psychiatric diagnoses, as it leverages the strengths of both systems. AI excels at processing large datasets, identifying patterns, and providing evidence-based insights quickly and accurately. Theoretically, it could do this without bias, or at least less bias than a human physician. This capability is invaluable in psychiatry, where conditions often require extensive analysis of patient data, medical history, and even family dynamics (Sharma et al., 2023; Sun et al., 2023). However, AI lacks the empathy, emotional intelligence, contextual understanding, and ethical reasoning that human clinicians provide, particularly in complex mental health cases such as PTSD or adolescent mental health (Lee et al., 2021).
Studies show that AI can enhance human decision-making, as seen with tools such as Hailey, which augment peer supporters’ empathy and improve the quality of interactions (Sharma et al., 2023). By integrating AI’s data-processing capabilities with the nuanced understanding and emotional intelligence of human clinicians, this hybrid model fosters more accurate diagnoses, tailored treatment plans, and emotional support—ultimately leading to better patient outcomes (Lee et al., 2021; Sun et al., 2023).
Conclusion
Thus, while AI can provide substantial support in managing the data-heavy aspects of psychiatric practice, human clinicians remain indispensable for offering the emotional connection and complex contextual understanding that are crucial for effective mental health care. The integration of artificial intelligence into mental healthcare presents a transformative opportunity to enhance diagnostic accuracy, personalize treatment, and improve access to services. By establishing regulatory frameworks and prioritizing responsible development, AI and the human mind, in collaboration, have the potential to revolutionize mental health services, creating more effective and compassionate approaches to supporting individual and community mental well-being.
References
Atkins, K. (2023, February 1). Unconscious bias and the public servant: What can we do to overcome unconscious bias? NIH Equity, Diversity, and Inclusion.
Croskerry, P. (2003, August). The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med, 78(8), 775–780. https://doi.org/10.1097/00001888-200308000-00003
Howard, J. (2019). Cognitive errors and diagnostic mistakes: A case-based guide to critical thinking in medicine. Springer.
James, T. A. (2024, September 24). Confronting the mirror: Reflecting on our biases through AI in health care. Harvard Medical School Postgraduate Education. https://postgraduateeducation.hms.harvard.edu/trends-medicine/confronting-mirror-reflecting-our-biases-through-ai-health-care
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Lee, E. E., Torous, J., De Choudhury, M., Depp, C. A., Graham, S. A., Kim, H. C., Paulus, M. P., Krystal, J. H., & Jeste, D. V. (2021). Artificial intelligence for mental health care: Clinical applications, barriers, facilitators, and artificial wisdom. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 6(9), 856–864. https://doi.org/10.1016/j.bpsc.2021.02.001
Mendel, R., Traut-Mattausch, E., Jonas, E., et al. (2011). Confirmation bias: Why psychiatrists stick to wrong preliminary diagnoses. Psychological Medicine, 41(12), 2651–2659. https://doi.org/10.1017/S0033291711000808
Palmer, C. E., Marshall, E., Millgate, E., Warren, G., Ewbank, M. P., Cooper, E., Lawes, S., Bouazzaoui, M., Smith, A., Hutchins-Joss, C., Young, J., Margoum, M., Healey, S., Marshall, L., Mehew, S., Cummins, R., Tablan, V., Catarino, A., Welchman, A. E., & Blackwell, A. D. (2024). Combining AI and human support in mental health: A digital intervention with comparable effectiveness to human-delivered care. medRxiv. https://doi.org/10.1101/2024.07.17.24310551
Rodgers, J. J., & Weinstein, B. L. (2017). Psychiatry in neurology. In J. S. Kass & E. M. Mizrahi (Eds.), Neurology secrets (6th ed., pp. 403–427). Elsevier. https://doi.org/10.1016/B978-0-323-35948-1.00029-2
Rubin, M., Arnon, H., Huppert, J. D., & Perry, A. (2024). Considering the role of human empathy in AI-driven therapy. JMIR Mental Health, 11, e56529. https://doi.org/10.2196/56529
Shen, J., Zhang, C., Jiang, B., Chen, J., Song, J., Liu, Z., He, Z., Wong, S., Fang, P., & Ming, W. (2019). Artificial intelligence versus clinicians in disease diagnosis: Systematic review. JMIR Medical Informatics, 7(3), e10010. https://doi.org/10.2196/10010
Sun, J., Dong, Q.-X., Wang, S.-W., Zheng, Y.-B., Liu, X.-X., Lu, T.-S., Yuan, K., Shi, J., Hu, B., Lu, L., & Han, Y. (2023). Artificial intelligence in psychiatry research, diagnosis, and therapy. Asian Journal of Psychiatry, 87, 103705. https://doi.org/10.1016/j.ajp.2023.103705
Yan, W. J., Ruan, Q. N., & Jiang, K. (2022). Challenges for artificial intelligence in recognizing mental disorders. Diagnostics (Basel, Switzerland), 13(1), 2. https://doi.org/10.3390/diagnostics13010002
