AI for Remediation

How we use AI for "remediation." And why we put it in quotes.

Remediation, reschmediation.

As the accessibility business grew up, long before Evinced, it grew up as a consulting service, mostly for government entities that had large backlogs of PDFs and websites with accessibility problems galore. That's where the term "remediation" came from, and frankly we hate the word.

Because it presumes (as the industry did, before Evinced) that the way to improve accessibility was to fix problems AFTER they were created. Remediation literally means "to make right," which means somebody at one point made it wrong.

That never sat well with us. We think the focus for any given enterprise should be to build a product development process that simply doesn't make accessibility mistakes in the first place.

Can AI help with that? Yes, it can. Is AI going to race through your product website and "remediate" any accessibility problems that it finds? No, it's not, and you should be wary of anyone who tells you it will.

But an AI system, properly constrained by accessibility expertise, can provide a lot of help to a development organization with a pile of backlogged defects. Let's go through a few of the ways.

Help desk and code review

Most companies are hard-pressed for accessibility expertise. So they either have no experts in house, or if they do, those experts are swamped with requests for assistance. We've seen companies whose accessibility teams average a three-month turnaround on requested reviews! So engineers, faced with that delay, are forced to scour the internet for an authoritative answer to their question. (Or they just give up.)

A chatbot, built on Generative AI that's been properly constrained for accessibility expertise, is a huge improvement there, and in benchmark testing our Chatbot is on the order of 3X better than an unconstrained, general-purpose AI tool. And it can be deployed on practically any work surface: Slack, Teams, the web, and even directly inside an IDE like VS Code.

It's a huge improvement just waiting to happen at nearly every company, and it can be used by everybody in the enterprise -- from front desk reception staff wondering what the best way is to greet someone who's blind, to product managers thinking about what's needed, to designers and engineers in the trenches of getting something done.

But this help desk function is by no means the most exciting part of an AI system when it comes to accessibility bug prevention and "remediation." The most exciting part, in our opinion, is code review.

A developer can simply paste in, or in some cases just highlight, a section of code they need help with or a review of. And a good AI system (like ours) can pinpoint all the accessibility problems immediately and return fix suggestions in the exact same language the code is written in. The developer will still need to review the suggestions, of course, but the head start is amazing.
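To make that concrete, here is a toy sketch of the kind of review described above. This is our illustration, not Evinced's actual engine: it flags just two common problems in an HTML snippet -- images missing alt text, and clickable <div>s without a button role -- and pairs each finding with a fix suggestion in the same markup the developer wrote.

```python
# Illustrative sketch only: a toy accessibility reviewer for HTML snippets.
# A real AI review covers far more checks; the point here is the shape of the
# output -- each problem comes back with a concrete, same-language fix.
from html.parser import HTMLParser


class A11yReview(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.findings.append(
                '<img> missing alt -- suggest: <img src="..." alt="describe the image">'
            )
        if tag == "div" and "onclick" in attrs and attrs.get("role") != "button":
            self.findings.append(
                'clickable <div> -- suggest: use <button>, or add role="button" and tabindex="0"'
            )


def review(snippet: str) -> list[str]:
    """Return a list of findings for an HTML snippet."""
    parser = A11yReview()
    parser.feed(snippet)
    return parser.findings
```

For example, reviewing `<div onclick="save()">Save</div><img src="logo.png">` yields two findings, while a proper `<button>` yields none.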

You could even ask our AI system to tell you the 3 most important problems to fix first, since we are automatically computing all that in the background and reporting it anyway.

AI coding...with testing loops

AI coding systems are a very new thing, but they promise some radical benefits for accessibility.

Because now a developer can use one AI system to advise another AI system, in real time, about accessibility.

This works via the Model Context Protocol and you can read more about our MCP Tools here.

But imagine you are using an AI coding system like Cursor (or Windsurf, Tabnine, GitHub Copilot, etc.) and you want it to build something for you, accessibly. You can set up your coding system to automatically query Evinced for advice before embarking upon building whatever it is you asked, and -- somewhat incredibly, to anybody who has young children at home -- your AI coding system will take that advice very seriously.

That's not all. Once the thing is built, your AI coding system can be configured to ask Evinced to test it, and bugs discovered can be automatically tackled and even iterated on by the coding system until the tests come back free of bugs, or free of critical bugs (as the developer prefers).
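The loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in -- the `prompt`, `test`, and `fix` callables represent what an MCP-wired coding agent and a testing service would actually do -- but the control flow is the point: generate, test, revise, and only stop when the scan comes back clean (or clean of critical bugs, as the developer prefers).

```python
# Sketch of a generate-test-fix loop for an AI coding agent. The three
# callables are hypothetical stand-ins for the agent and the testing service.

def build_accessibly(prompt, test, fix, max_rounds=5, critical_only=False):
    """Generate code, scan it, and iterate until the scan comes back clean."""
    code = prompt()                      # agent produces a first draft
    for _ in range(max_rounds):
        bugs = test(code)                # e.g. an accessibility scan over MCP
        if critical_only:
            bugs = [b for b in bugs if b.get("severity") == "critical"]
        if not bugs:
            return code                  # clean: hand back to the developer
        code = fix(code, bugs)           # agent revises against the bug list
    raise RuntimeError("still failing after max_rounds; needs a human")
```

Without the `test` step this degenerates into exactly the "try this at home" pattern discussed next: the agent declares victory on the first draft.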

Note that many developers try this at home, but without the testing loop. In our view, this is bound to lead to some serious accessibility trouble. We've tested all the major coding models, and left to their own devices they simply will not produce good accessibility results most of the time. Put another way: it is very easy to ask an AI coding system to make something with accessible code. But it is very hard to get that code to actually be accessible, and to be confident enough about its accessibility to commit it. At least, without Evinced.

Better fix suggestions from LLMs

At Evinced, we also use Generative AI and Large Language Models ("LLMs") to help with various aspects of recommended fixes for bugs that were detected with other techniques.

For example, we might well use Machine Learning to detect that the landmark structure on a web page has problems. Native HTML landmarks like <header> and <nav> and <section> and <form> are particularly important for screen reader users, because they enable the user to skip around the page quickly instead of stepping through it one word at a time.

If we detect that the <header> tag is missing, that's only part of the problem. The other part is suggesting to the developer what on the page should be tagged as <header>.
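Here is a simplified sketch of that two-part problem. The detection half is easy; the suggestion half needs a heuristic, and the one used here -- "the <header> probably belongs around the block containing the page's <h1>" -- is our own simplification, not how a production suggestion engine actually reasons.

```python
# Illustrative only: detect missing landmarks and suggest where one belongs.
# The "<header> wraps the <h1> region" heuristic is a deliberate simplification.
from html.parser import HTMLParser


class LandmarkCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.seen = set()

    def handle_starttag(self, tag, attrs):
        if tag in {"header", "nav", "main", "footer", "h1"}:
            self.seen.add(tag)


def suggest_landmarks(doc: str) -> list[str]:
    """Return landmark suggestions for an HTML document string."""
    check = LandmarkCheck()
    check.feed(doc)
    suggestions = []
    if "header" not in check.seen:
        hint = " -- likely the block containing the <h1>" if "h1" in check.seen else ""
        suggestions.append("add a <header> landmark" + hint)
    if "main" not in check.seen:
        suggestions.append("wrap the primary content in <main>")
    return suggestions
```

Run against `<h1>Title</h1><p>body</p>`, this flags both missing landmarks and points the <header> suggestion at the <h1> region; a page that already has both landmarks comes back clean.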

And that's where LLMs come in. We use them, in our constrained way, to make specific code suggestions about how and where to fix something. We also use them to automatically generate reports based on the data we can expose to them.