
Addressing Bias and Ableism in AI

Integrating Accessibility into Product Development

By Deborah Dietz

When I first heard about AI tools like ChatGPT, I was deeply skeptical. I learned about them through a training cohort for nonprofit leaders, where the focus was on using AI to streamline grant applications and fundraising campaigns. My immediate reaction? No thanks.

As someone who has spent years crafting thoughtful, purpose-driven work, I was uncomfortable with the idea of submitting my “work product” to an AI system and feeding it into a giant learning model. I’ve always taken pride in the unique voice and values my organization brings to grant writing and advocacy. I wasn’t ready to risk losing that.

But things changed.

My mindset began to shift when I was launching Disability Insights, my new company focused on helping businesses become more inclusive and accessible. My coach from the SBA (Small Business Administration) suggested that AI, specifically ChatGPT, could be a powerful tool for me. Hesitantly, I opened an account and began experimenting. I didn’t include my name, my organization, or any proprietary information at first. I just played around.

And I was blown away.

What I discovered was that ChatGPT could act as a powerful assistant, helping me break through writing blocks, offering quick drafts based on my outlines, and accelerating my workflow. It wasn’t replacing my ideas or creativity. It was enhancing them.

My Process + AI = Efficiency

As an executive director and content creator, I’ve always followed a methodical process. I start with an idea, build a comprehensive outline, and only then begin writing. But my challenge has always been that awkward moment of staring at a blank screen. Once I get started, the words flow, but that starting point can freeze me.

That’s where ChatGPT has changed the game. Now, I take my outline, provide tone and length guidelines, and ask for a draft. In seconds, I have a starting point. Then, I do what I do best: refine, edit, and make it truly mine. The time savings? Huge. The frustration? Gone. The quality? Even better than before.

It’s like having a personal writing assistant who’s available 24/7 and never takes offense at edits. What’s not to love?

Well… there’s one thing.

The Problem: Built-In Ableism

As much as I appreciate what this AI assistant can do, I’ve noticed a consistent issue: the use of ableist language.

For those unfamiliar, ableism refers to discrimination (both subtle and overt) against people with disabilities. It’s not just about physical access, like stairs with no ramps or inaccessible bathrooms. It’s also about language, attitudes, and assumptions.

Ableism shows up in microaggressions — like calling someone “brave” for doing everyday tasks, speaking to an interpreter instead of directly to a Deaf person, or touching someone’s wheelchair without permission. It also shows up in the words we use, often without realizing the harm.

In my experience using ChatGPT, I’ve repeatedly encountered ableist phrasing, especially in references to sensory or verbal communication. Here are a few examples:

Example 1:

ChatGPT said:
“The Wallet Card is a tool that helps you feel safe, confident, and in control when talking to the police. It gives you a way to speak up for yourself…”

My response:
“I don’t want to use ‘speak up.’”

ChatGPT revised:
“It gives you a way to share important information about yourself in a clear and respectful way.”

That change may seem small, but it matters. Not everyone communicates verbally. Using the phrase “speak up” excludes individuals who use communication devices, sign language, or other forms of expression. The revised sentence is much more inclusive—and much more in line with my values.

Example 2:

ChatGPT said:
“…this guide will give you something powerful—a voice that’s ready before it needs to be used.”

My response:
“Please remove the word ‘voice.’”

ChatGPT revised:
“…this guide will give you something powerful—a plan that’s ready before you ever need to use it.”

Again, replacing “voice” with “plan” avoids unintentional ableism. Not everyone has—or uses—a voice. But everyone can benefit from a plan.

Despite my consistent corrections, the system doesn’t retain these lessons. Each time, I must make the same changes. The AI doesn’t seem to understand that this language is problematic.

That’s a red flag.
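In the meantime, corrections like the ones above can be partly automated. Below is a minimal sketch of an inclusive-language pass you could run over an AI-generated draft before publishing it. The phrase list is illustrative only, drawn from the two examples in this article; it is a hypothetical starting point, not a complete or authoritative style guide, and any real checklist should be built with input from people with disabilities.

```python
import re

# Illustrative starter list of ableist phrasings paired with more
# inclusive alternatives (drawn from this article's examples; extend
# it to match your own style guide).
INCLUSIVE_ALTERNATIVES = {
    r"\bspeak up\b": "share important information",
    r"\ba voice\b": "a plan",
}

def flag_ableist_language(text: str) -> list[tuple[str, str]]:
    """Return (matched phrase, suggested alternative) pairs found in text."""
    findings = []
    for pattern, suggestion in INCLUSIVE_ALTERNATIVES.items():
        # Case-insensitive search so "Speak up" is caught as well.
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            findings.append((match.group(0), suggestion))
    return findings

draft = "The Wallet Card gives you a way to speak up for yourself."
for phrase, alternative in flag_ableist_language(draft):
    print(f"Consider replacing '{phrase}' with '{alternative}'.")
```

A simple pass like this won’t catch tone or context, but it turns a repeated manual correction into a one-time checklist.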

Why This Matters

As I’ve become more attuned to this issue, I now see ableist language regularly on websites, in blogs, and in social media posts. It’s everywhere. And much of it likely comes from people using AI-generated content without fully understanding its implications.

That’s why this issue needs more attention.

If we are going to rely on AI to help shape the future of communication, we must ensure that accessibility and inclusion are built into these systems from the start. Otherwise, we risk reinforcing the same discriminatory norms that people with disabilities have fought against for decades.

AI has enormous potential to help people, but only if it truly works for all people.

Looking Ahead

I haven’t tested every platform out there, so I can’t say whether this issue is unique to ChatGPT or systemic across all AI tools. But I hope it becomes a priority.

Because words matter.
And the meaning behind those words matters even more.

If AI is to be our assistant, our co-writer, and even our teacher, it has to learn not to perpetuate bias.

Let’s push for AI that understands inclusion, respects diversity, and supports equity, especially for people with disabilities. Let’s ensure that progress doesn’t leave anyone behind.

Have you noticed ableist language in your AI tools? Have you found a more inclusive platform? I’d love to hear your thoughts. Let’s keep the conversation going.