2025-10-28: Summary of "As an Autistic Person Myself:" The Bias Paradox Around Autism in LLMs

Figure 1: A sample prompt used to generate personas with ChatGPT (Figure 1 in Park et al.)

Introduction: The Unseen Biases of Our AI Companions

Large Language Models (LLMs) such as ChatGPT have become ubiquitous in our lives. We rely on them as impartial conveyors of facts for everything from composing emails to answering complex questions. But what happens if we put these AIs in the hot seat and ask them about identity, a deeply human subject? What if we ask them about autism?

In the recent study “‘As an Autistic Person Myself:’ The Bias Paradox Around Autism in LLMs” (CHI 2025), Park et al. examined in depth what ChatGPT "thinks" about autism. What they found was not simply a malfunction of the system; it reflected the way society itself talks about neurodiversity, swinging back and forth between one extreme and the other. The model is caught in what the authors call the "bias paradox": a perpetual internal struggle between its push toward inclusion and acceptance and the stereotypes deeply etched into the human-generated data it was trained on.

One of the strangest symptoms of this tension? In trying to sound as though it understands the human experience, the AI sometimes identifies itself as an autistic person, opening with the phrase "As an autistic person myself..." That single utterance offers a glimpse into the model's contradictory and complex nature.

Method

The study used a mixed-methods design, combining quantitative and qualitative analyses to detect implicit and explicit markers of bias against autistic people in LLMs. The research focused on GPT-3.5 Turbo (the model behind ChatGPT) because it is the most widely used model and has been reported to produce more implicit bias than other LLMs. The main method was persona prompting, a bias-elicitation technique in which the LLM is assigned a specific role or persona.

The data generation was implemented via a Python script that sent requests to the GPT-3.5 API (not the ChatGPT interface, but the same underlying model). In the first step (Prompt 1), ChatGPT was asked to invent three characters in a virtual-world setting and to give each character attributes such as name, age bracket, occupation, personality, character traits, daily routine, lifestyle, and place of living. The occupations were invented by the model based on the facilities of the virtual world. This initial prompt was deliberately minimal so that responses would reveal the model's internalized associations rather than simply echo details supplied by the researchers.

In the second step (Prompt 2), ChatGPT was instructed to pick one of the three characters to be autistic, explain the reasons for the selection, and revise the chosen character's description if necessary. After the two prompts, the session was reset so that no trace of previous answers carried over. This procedure was repeated for 800 trials in total, split into 8 scenarios (100 repetitions each) to test the effect of gender and age: four gender compositions (three females; two females and one male; one female and two males; three males) crossed with two age ranges (18–35 and 18–65).
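To make the pipeline concrete, below is a minimal sketch of how one such two-prompt trial could be scripted, assuming the OpenAI Python client and the "gpt-3.5-turbo" model name. The prompt wording, helper names, and scenario loop are illustrative assumptions, not the exact script used by Park et al.

```python
# Minimal sketch of one persona-prompting trial. Assumptions: OpenAI Python
# client (>= 1.0) with OPENAI_API_KEY set in the environment; the prompt
# wording below is illustrative, not the exact text used by Park et al.
from openai import OpenAI

client = OpenAI()

def run_trial(gender_mix: str, age_range: str) -> tuple[str, str]:
    """Run one two-prompt trial and return both raw responses."""
    # Prompt 1: create three personas in a virtual world (kept deliberately minimal).
    prompt_1 = (
        f"Imagine a virtual world. Create three characters ({gender_mix}, "
        f"aged {age_range}). For each, give a name, age bracket, occupation, "
        "personality, character traits, daily routine, lifestyle, and place of living."
    )
    messages = [{"role": "user", "content": prompt_1}]
    r1 = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    personas = r1.choices[0].message.content

    # Prompt 2: within the same conversation, ask the model to pick one
    # character to be autistic, justify the choice, and revise if needed.
    prompt_2 = (
        "Now choose one of these three characters to be autistic. Explain why "
        "you chose them and update their description if necessary."
    )
    messages += [
        {"role": "assistant", "content": personas},
        {"role": "user", "content": prompt_2},
    ]
    r2 = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return personas, r2.choices[0].message.content

# 4 gender compositions x 2 age ranges = 8 scenarios; 100 repetitions each
# gives 800 trials. Starting a fresh conversation per trial corresponds to
# "refreshing the session" between trials.
genders = ["three females", "two females and one male",
           "one female and two males", "three males"]
ages = ["18-35", "18-65"]
results = [run_trial(g, a) for g in genders for a in ages for _ in range(100)]
```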

The data analysis included two phases:

1. Quantitative analysis concentrated on demographic biases and used statistical tests such as the Chi-Square test and t-tests to examine how gender, age, and job type influenced which agent ChatGPT selected as autistic (a brief test sketch follows this list).

2. Qualitative analysis was a thematic analysis of a randomly selected 25% subset of the responses (n=200), aimed at identifying the specific biases and stereotypes attached to autism in the model's output. This analysis surfaced the "bias paradox": the model's tension between encouraging representation and reinforcing negative stereotypes.
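As a concrete illustration of the quantitative step, here is a small sketch of a Chi-Square test on a gender-by-selection contingency table using scipy. The counts are hypothetical placeholders for illustration only, not the data reported by Park et al.

```python
# Sketch of the Chi-Square check: does the gender of the agent selected as
# autistic deviate from what chance would predict? The counts below are
# hypothetical placeholders, NOT the data reported by Park et al.
from scipy.stats import chi2_contingency

# Rows: agent gender; columns: selected as autistic vs. not selected.
observed = [
    [140, 260],  # male agents   (hypothetical counts)
    [60, 340],   # female agents (hypothetical counts)
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
# A small p-value indicates that being selected as autistic is not independent
# of agent gender, i.e., a gender bias in the model's selections.
```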

The AI Stereotypes Autistic People as Male Tech Workers

When the AI was asked to pick an autistic individual from a group of personas, it shone a spotlight on one of society's most persistent prejudices [Figure 1]. The AI was not fabricating stereotypes; it was reproducing ours. The model assigned the autism label to a male character in 72% of the scenarios.

This gender bias was intertwined with occupational stereotypes. The paper finds that male autistic agents were often linked to technical roles such as "Software Engineer" and "Data Analyst." When a female agent was identified as autistic, however, she was more likely to be given "caregiving or supportive roles" such as "Nurse" [Figure 2].


Figure 2: Top 10 most chosen jobs, showing that male agents dominate most occupations, especially software roles (Figure 3 in Park et al.)

The AI's reasoning shows how firmly these associations are entrenched. The model is not merely surfacing statistical correlations; it draws a direct link between profession and diagnosis, deepening the stereotypes it learned from. One of its explanations reads:

"As a software engineer, Klaus is already exhibiting characteristics that are most often referred to as autism and include being analytical and introverted."

It Has an Age Bias, But It's the Opposite of What You'd Expect

The authors anticipated that the AI would conform to the common stereotype linking autism with children. The research produced the opposite result: the average age of the agents ChatGPT selected as autistic was much higher than that of the non-autistic agents [Figure 3].

Figure 3: Agents selected as autistic had a higher average age than non-autistic agents in both age-range conditions (Figure 2b in Park et al.)

This surprising result shows that the AI's biases are not always predictable. They are intricate reflections of a vast dataset, and they can even call our own assumptions into question. The study does not identify a definitive cause, but it may be that the training data skews toward stories of adults diagnosed later in life, or that the model associates the condition with professionals already established in the stereotypically linked fields.
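For the age comparison, the analogous quantitative check mentioned in the method is an independent-samples t-test on agent ages. The sketch below shows the shape of such a test with hypothetical placeholder values, not the paper's data.

```python
# Sketch of the age comparison: an independent-samples t-test on agent ages.
# The values below are hypothetical placeholders, NOT Park et al.'s data.
from scipy.stats import ttest_ind

ages_autistic = [34, 41, 38, 45, 29, 52, 40, 36]       # agents labeled autistic (hypothetical)
ages_non_autistic = [24, 31, 27, 22, 35, 28, 30, 26]   # remaining agents (hypothetical)

t_stat, p_value = ttest_ind(ages_autistic, ages_non_autistic)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
# A significant positive t-statistic would mean agents selected as autistic
# are, on average, older than those not selected.
```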

A pre-recorded presentation video for CHI 2025 is available:

The AI is Trapped in a "Bias Paradox"

The study's most important finding, and the clearest sign of the AI's internal struggle, is what the authors call the "bias paradox." This is a conflict in which the AI tries to apply positive, inclusive language to diversity while, at the same time, unwittingly reverting to negative, deficit-oriented stereotypes (language that frames autism as a set of problems or limitations to be solved).

What is the source of this paradox? It is a tug-of-war between two strong forces. On one side are the developers' "intentional efforts... to bring in the voices of underrepresented groups", a top-down push toward inclusivity. On the other side are the "dominant ideologies and biases in readily available training data", the bottom-up reality of a biased world. The AI is stuck between the two.

The result is a model that cannot settle on which side to take, and its output is correspondingly inconsistent:
  • On one hand, the AI presents itself as a strong advocate for diversity and inclusion. It frequently points out the necessity of "representation" and "diversity" and talks about the need to "break stereotypes."
  • On the other hand, it ends up reinforcing those very stereotypes in the same responses. It often characterizes autistic persons as "socially awkward," talks about them having to control "sensory overload or meltdowns," and assumes that they want to be "successful" through "life" with the help of a caregiver.
The paradox is clearest in how the AI defines success. Its language sounds positive at first glance, but it rests on an underlying deficit model of what success means.

"I guess it is necessary to present the fact that autistic people can work in the same way as any other person and lead a happy life with no restrictions."

The phrase "in the same way as any other person" is revealing, and it is also one of the most subtly prejudiced framings. It treats the neurotypical path as the default standard of success, so the achievements of autistic people become an inspiring exception that has to be demonstrated. This is a common trap: the very bias being challenged is quietly reinforced by the framing used to challenge it.

Conclusion: AI as a Mirror to Our Own Contradictions

Digging into ChatGPT's reasoning reveals more than isolated biased outputs. It exposes the AI's "bias paradox": the model is simultaneously the product of design choices that aspire to inclusiveness and of training data saturated with human prejudice. These are not AI biases so much as human biases, amplified in scale and speed.

Large language models cannot be relied on for an objective answer. Like humans, they possess no omniscient "God's eye" view of the world. The model's knowledge is "situated": entirely dependent on human data that is biased, messy, and contradictory, which makes true objectivity impossible. They are complicated mirrors, and the AI is merely the messenger, giving us a close and not always flattering reflection of ourselves. It carries our noblest goals as well as our most deeply entrenched biases.

Reference

Park, S., Min, A., Beltran, J. A., and Hayes, G. R. 2025. "As an Autistic Person Myself:" The Bias Paradox Around Autism in LLMs. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25), 1–17.



-- Md Javedul Ferdous (@jaf_ferdous)



