I am writing this article to hopefully explain the basics of how a screen reader/magnifier obtains information to provide feedback to the user. I am going to use JAWS (Job Access With Speech) for Windows as my example, since it is personally my screen reader of choice; however, other screen readers/magnifiers use similar methods, and I will point this out as the article progresses.
The first technique to mention is called “screen scraping,” which was a popular method used by screen readers/magnifiers in the DOS era. This method, however, got pushed to the back seat with the introduction of Windows and the Graphical User Interface (GUI).
JAWS uses another method to populate its OSM (Off Screen Model), starting with Windows 95/NT 4.0 Service Pack 4. Note: although Windows 95/98/ME did not utilize the video chain, Windows NT 4.0 SP4 and the versions that followed (Windows 2000 onward, including Windows XP) did, and this is what I would like to focus on.
JAWS, along with other screen readers/magnifiers, injected its own video driver into the operating system's video chain. You can picture this as funnels sitting one on top of another, allowing the content to flow through. The following is an example:
VideoDriver.dll -> JAWSVideo.dll -> VideoDriver.sys
In the funnel analogy above, the JAWS video driver is the middle funnel. Basically, the operating system's video driver receives the drawing information, allows it to flow through to the JAWS video driver, and the content then flows on through to the system driver.
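To make the funnel idea concrete, here is a minimal sketch in Python of a pass-through chain where a middle layer copies everything that flows through it. All of the class and layer names here are illustrative stand-ins, not the actual driver interfaces:

```python
# Hypothetical sketch of the "funnel" pass-through idea: each layer in the
# video chain receives a drawing call, may observe it, and hands it on.
# The names (VideoLayer, InterceptingLayer, etc.) are illustrative only.

class VideoLayer:
    """A layer in the chain that forwards drawing calls downstream."""
    def __init__(self, name, downstream=None):
        self.name = name
        self.downstream = downstream

    def draw_text(self, text, x, y):
        self.observe(text, x, y)
        if self.downstream:
            self.downstream.draw_text(text, x, y)

    def observe(self, text, x, y):
        pass  # ordinary layers just pass content through


class InterceptingLayer(VideoLayer):
    """Stands in for the injected screen-reader driver: it copies every
    text call into its own store before passing it along unchanged."""
    def __init__(self, name, downstream=None):
        super().__init__(name, downstream)
        self.captured = []

    def observe(self, text, x, y):
        self.captured.append((text, x, y))


# Build the chain: OS driver -> screen-reader driver -> system driver.
system_driver = VideoLayer("VideoDriver.sys")
jaws_video = InterceptingLayer("JAWSVideo.dll", downstream=system_driver)
os_driver = VideoLayer("VideoDriver.dll", downstream=jaws_video)

os_driver.draw_text("File", 10, 0)
os_driver.draw_text("Edit", 50, 0)
print(jaws_video.captured)  # the middle funnel saw everything that flowed through
```

The key design point the sketch tries to show is that the middle layer never alters what it passes along; it only observes a copy for its own use.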
Once JAWS has the content, it populates the OSM, which is essentially the brains of the screen reader. The OSM is where most, if not all, decisions are made about what gets spoken to the user and when.
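As a rough picture of what "populating the OSM" might mean, here is a deliberately simplified sketch: a store of text fragments keyed by screen position that can be queried for what is on a given row. A real OSM tracks far more (fonts, colors, window ownership, and so on), and this class is my own invention for illustration:

```python
# A hypothetical, highly simplified Off Screen Model: a store of text
# fragments with screen positions, which the screen reader can query
# to decide what to speak. Real OSMs are far richer than this.

class OffScreenModel:
    def __init__(self):
        self.fragments = []  # (x, y, text) captured from the video chain

    def add(self, x, y, text):
        self.fragments.append((x, y, text))

    def line_at(self, y):
        """Return the text on a given row, ordered left to right."""
        row = sorted((x, t) for x, fy, t in self.fragments if fy == y)
        return " ".join(t for _, t in row)

osm = OffScreenModel()
osm.add(50, 0, "Edit")
osm.add(10, 0, "File")
osm.add(10, 20, "Hello world")
print(osm.line_at(0))   # "File Edit"
print(osm.line_at(20))  # "Hello world"
```

Notice that the fragments can arrive in any order; the model reconstructs reading order from position, which is one reason a positional store is useful for deciding what to speak.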
Microsoft then introduced mirror driver technology, which is basically what it sounds like: it mirrors the content from the video driver as it passes to the system driver.
Enough of the basics of the video chain; let's move on to some other methods that assistive technology uses to provide feedback to the user, such as MSAA (Microsoft Active Accessibility). MSAA has essentially evolved into UIA (UI Automation), the framework that assistive technologies now hook into to interact with the Windows operating system and applications.
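The core idea behind both MSAA and UIA is that every control exposes properties such as a role and a name, arranged in a tree that an assistive technology can walk. The following is a conceptual sketch of that tree-walking idea in plain Python; the `Element` class and the spoken-output format are my own stand-ins, not the actual MSAA/UIA interfaces:

```python
# Hypothetical sketch of the tree-of-elements idea behind MSAA/UIA:
# every control exposes a role and a name, and an assistive technology
# walks the tree to decide what to announce. These classes are
# illustrative only, not the real COM interfaces.

class Element:
    def __init__(self, role, name, children=()):
        self.role = role
        self.name = name
        self.children = list(children)

def announce(element, depth=0):
    """Depth-first walk, yielding what a screen reader might speak."""
    yield "  " * depth + f"{element.name}, {element.role}"
    for child in element.children:
        yield from announce(child, depth + 1)

dialog = Element("dialog", "Save changes?", [
    Element("button", "Save"),
    Element("button", "Don't Save"),
    Element("button", "Cancel"),
])

for line in announce(dialog):
    print(line)
```

Because the application itself supplies the role and name, this approach does not depend on intercepting drawing calls at all, which is a large part of why it has displaced the video-chain techniques above.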
Last but not least, I will mention the use of a DOM (Document Object Model). Microsoft exposes a DOM API for several of its applications for many purposes, one of which is accessibility; this can be observed mostly in the Microsoft Office suite of applications. Many other applications, such as web browsers, also expose a DOM API.
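To give a flavor of what "reading through a DOM" looks like, here is a small sketch using Python's built-in `xml.dom.minidom` as a stand-in for the object models that applications like word processors and browsers expose. The `<document>` markup and the `text_of` helper are invented for illustration:

```python
# Hypothetical sketch of reading content out of a document object model.
# Python's built-in xml.dom.minidom stands in here for the DOM APIs an
# application might expose; the markup below is invented for the example.

from xml.dom import minidom

doc = minidom.parseString(
    "<document><heading>Quarterly Report</heading>"
    "<paragraph>Sales rose in the third quarter.</paragraph></document>"
)

def text_of(node):
    """Collect all text content beneath a DOM node."""
    if node.nodeType == node.TEXT_NODE:
        return node.data
    return "".join(text_of(child) for child in node.childNodes)

# Walk the top-level children and report each one with its element name,
# roughly the way an assistive technology might announce structure.
for child in doc.documentElement.childNodes:
    print(f"{child.tagName}: {text_of(child)}")
```

The advantage over scraping pixels or drawing calls is clear from even this toy example: the structure (heading versus paragraph) comes for free from the object model.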
Well, OK, that wasn’t the last item to discuss. What about the internet? Screen readers use many of the methods already mentioned to obtain content, then parse the rendered HTML; several screen readers/magnifiers use this to populate what is known as a virtual buffer, so that users can read and interact with web pages much as a sighted person viewing the page would.
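A virtual buffer can be pictured as the rendered HTML flattened into a linear sequence of lines, with roles such as "heading" or "link" announced inline, that the user then arrows through. Here is a minimal sketch of that flattening step using Python's standard `html.parser`; the role labels and the tiny tag table are my own simplifications, and real virtual buffers handle vastly more of HTML than this:

```python
# Hypothetical sketch of building a virtual buffer: flatten rendered HTML
# into a linear list of announceable lines. The ROLES table and output
# format are invented simplifications for illustration.

from html.parser import HTMLParser

class VirtualBuffer(HTMLParser):
    ROLES = {"h1": "heading level 1", "a": "link", "p": ""}

    def __init__(self):
        super().__init__()
        self.lines = []   # the linear buffer the user arrows through
        self._role = ""   # role announced before the next piece of text

    def handle_starttag(self, tag, attrs):
        if tag in self.ROLES:
            self._role = self.ROLES[tag]

    def handle_data(self, data):
        text = data.strip()
        if text:
            label = f"{self._role}: {text}" if self._role else text
            self.lines.append(label)
            self._role = ""

buf = VirtualBuffer()
buf.feed("<h1>Welcome</h1><p>Read our <a href='/news'>news</a> page.</p>")
for line in buf.lines:
    print(line)
```

Feeding that snippet produces lines like "heading level 1: Welcome" and "link: news", which hints at why the semantics of the underlying HTML matter so much to screen reader users.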
Screen readers/magnifiers do not affect the internet in any way; all they do is interface with the methods and content rendered by whichever browser the user has chosen.
I could have gone into quite a bit more technical detail; however, this is just the basics, so whoever reads this doesn’t begin to fall asleep. So in closing, thanks for reading, and I welcome any feedback.