09/05/2022
Newsroom, Diário da Saúde
Automated diagnosis failure
Machine learning has helped us understand the complex patterns of brain activity. Scientists then associate these patterns with human behaviors such as working memory, traits such as impulsivity, and disorders such as depression.
With these tools, scientists can model these relationships; in theory, the models can then be used to make predictions about the behavior and health of individual patients.
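In practice, such a model is often a regularized regression from brain features (for example, functional connectivity values) to a behavioral score. Here is a minimal sketch in Python on synthetic data; the ridge-regression choice, variable names, and sample sizes are illustrative assumptions, not the study's actual pipeline:

```python
# Illustrative sketch (synthetic data), not the study's pipeline:
# predict a behavioral score from brain connectivity features and
# evaluate the model on subjects it never saw during training.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_edges = 200, 1000                          # hypothetical sample
X = rng.normal(size=(n_subjects, n_edges))               # connectivity features
y = X[:, :10].sum(axis=1) + rng.normal(size=n_subjects)  # behavioral score

# Out-of-sample predictions: each subject's score comes from a model
# trained only on the other folds.
y_pred = cross_val_predict(Ridge(alpha=1.0), X, y, cv=10)
print("prediction-observation correlation:", np.corrcoef(y, y_pred)[0, 1])
```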
But this only works if the models represent everyone; only then can they be applied to any given individual. The problem, as many studies have shown, is that this is not the case: whatever model is used, there is always a large share of people whom the model simply does not fit.
So researchers at Yale University (USA) decided to study for whom these models tend to fail, why this happens, and what can be done about it.
Abigail Greene, who led the study, summed it up: "If we want to move this kind of work into a clinical application, for example, we need to make sure that the model applies to the patient sitting in front of us."
Models without generalizability
Researchers are interested in how models can provide a more accurate psychological characterization, which they believe can be achieved in two ways.
The first is to refine how the patient population is classified. The diagnosis of schizophrenia, for example, covers a range of symptoms and can look very different from person to person. A deeper understanding of the neural underpinnings of schizophrenia, including its symptoms and subcategories, could allow researchers to group patients more precisely.
Second, there are features such as impulsivity that a variety of diagnoses share. Understanding the neural basis of impulsivity can help clinicians target these symptoms more effectively, regardless of the associated disease diagnosis.
But, first of all, the models must generalize to everyone. When the team put this to the test, the models correctly predicted how well most people would score on cognitive tests. For some people, however, the predictions were not just inaccurate but wrong in direction: the models predicted poor scores for people who actually scored well, and vice versa.
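To make that check concrete, here is a sketch of one way to flag such directionally wrong predictions, splitting subjects at the median into "high" and "low" scorers (synthetic data; whether this matches the paper's exact procedure is an assumption):

```python
# Illustrative sketch: flag subjects whose out-of-sample prediction
# lands on the opposite side of the median from their observed score.
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.normal(size=200)                        # observed scores
y_pred = y_true + rng.normal(scale=1.0, size=200)    # noisy predictions

true_high = y_true > np.median(y_true)
pred_high = y_pred > np.median(y_pred)
misclassified = true_high != pred_high               # wrong side of the median
print(f"misclassified: {misclassified.sum()} of {misclassified.size} subjects")
```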
The research team then analyzed the participants whom the models had failed to classify correctly.
"We found there was consistency: the same individuals were misclassified across tasks and across analyses," Greene said. "And the people who were misclassified in one data set had something in common with the people misclassified in another data set. So there's really something important about misclassification."
Artificial intelligence reproduces stereotypes
Next, the researchers looked to see if these similar misclassifications could be explained by differences in the brains of these individuals. But there were no consistent differences.
Instead, they found that misclassifications were associated with sociodemographic factors, such as age and education, and clinical factors, such as symptom severity.
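One simple way to probe such an association, sketched below with synthetic data, is to regress a misclassification indicator on those covariates; the logistic-regression choice and the covariate names (age, education, symptom severity) are illustrative assumptions, not the study's reported analysis:

```python
# Illustrative sketch: does being misclassified relate to covariates?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
covariates = np.column_stack([
    rng.normal(40, 12, n),   # age (years)
    rng.normal(14, 2, n),    # education (years)
    rng.normal(0, 1, n),     # symptom severity (standardized)
])
# Synthetic misclassification flags that depend weakly on age and severity.
logit = -1.0 + 0.03 * (covariates[:, 0] - 40) - 0.2 * covariates[:, 2]
flagged = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

result = sm.Logit(flagged.astype(float), sm.add_constant(covariates)).fit(disp=0)
print(result.summary(xname=["const", "age", "education", "severity"]))
```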
Ultimately, they concluded that the models reflect not only cognitive ability; they reflect more complex "features", a kind of combination of cognitive abilities and various social, demographic, and clinical factors, Greene explained.
The impact of this finding is greater than it may appear, running counter to the growing use of artificial intelligence in medical, and above all psychiatric, diagnosis: these models end up reproducing what the initial diagnoses and assessments attributed to the patients who were originally analyzed, back when the model was under development.
The researcher concluded that “models failed anyone who did not fit the stereotype.”
Article: Brain–phenotype models fail for individuals who defy sample stereotypes
Authors: Abigail S. Greene, Xilin Shen, Stephanie Noble, Corey Horien, C. Alice Hahn, Jagriti Arora, Fuyuze Tokoglu, Marisa N. Spann, Carmen I. Carrión, Daniel S. Barron, Gerard Sanacora, Vinod H. Srihari, Scott W. Woods, Dustin Scheinost, R. Todd Constable
Journal: Nature
DOI: 10.1038/s41586-022-05118-w