How blind users "see" websites: a conversation about accessibility, navigation and UX
We are used to discussing website accessibility through standards: WCAG, national requirements, and checklists. In practice, though, digital accessibility is not about ticking boxes. It is about being able to complete a real scenario: orient on the page, perform an action, and reach a result.
We spoke to testers who work with screen readers every day — VoiceOver, NVDA, JAWS, and TalkBack — about how they approach unfamiliar interfaces, why one error can collapse an entire user journey, what has changed in web accessibility over the last few years, and where AI genuinely helps.
This conversation grew out of our joint work on Special View, a portal for people with different visual abilities: our team handled development, and our colleagues handled accessibility testing and certification.
Participants
- Vadim Smirnov — Creative Director and founder of OKC.Media
- Sergey Syrtsov — blind user and accessibility tester
- Evgeny Arnapolsky — head of ANO DO “Centre I2T”
- Anatoly Popko — founder of ANO DO “Centre I2T”, expert of the Public Chamber of the Russian Federation, and one of the co-authors of GOST R 52872-2019
Key takeaways
- The first seconds decide everything: if headings, landmarks, and labels are not structured well, users immediately understand that navigation will be difficult.
- Accessibility is not a separate scenario: blind users go through the same journeys as sighted users — the channel is different, but the need for structure is higher.
- One critical error can break the entire process: an interface may be “almost accessible”, yet one inaccessible element can make the whole chain unusable.
- Accessibility testing should start earlier: it is cheaper and more effective to address accessibility at component and system level than to patch a finished product.
- Accessibility is both social responsibility and business value: people choose services they can use independently — even if they are more expensive.
1. How to tell in the first seconds whether a website will be usable, and how blind users get acquainted with a page
Vadim: Let us begin with the first question. If we step away from specific tools: by which early signs do you understand that a page is well made and likely to be convenient? And when does it become clear that difficulties are ahead?
Anatoly: When I open a page, I immediately move through elements with a screen reader. If there are many unlabeled buttons and headings are not marked up properly, it is clear the site will be hard to use.
But if the first 5 to 10 elements are labeled and pressing the “1” key (the screen-reader shortcut for first-level headings) lands me on the H1 at once, even on a page I have never visited before, that means the developer has done things properly, and orientation will be convenient.
You can see this quickly even in simple scenarios. For example, in Yandex or Google search, it is logical that autofocus lands in the input field. If it does, things already work as expected.
Evgeny: My experience is this: I start by assuming a site is built with accessibility in mind. On an unfamiliar page, I try to assess it quickly with two techniques.
First, moving by headings. If headings exist, there is usually a good chance I can understand structure and content quickly.
Second, moving by landmarks and regions. If the page is marked up, I can get the layout almost right away.
So if there are no headings or landmarks — that's already a red flag: getting acquainted with the site and navigating further will be significantly more difficult.
Sergey: I agree with Anatoly and Evgeny, but I want to add why this happens in the first place. When you open a page and cannot see it, the first task is orientation. Headings and landmarks exist precisely for that: to understand where everything is, without moving through every element one by one.
That is why a main page heading is so important — ideally H1. Once you move to it, you instantly understand where the main content begins. Everything above is usually header and navigation. It becomes your point of reference: where to look next and where to move.
When you cannot “scan” a page visually, keyboard commands become your map: moving by headings and landmarks. There are also finer techniques for familiar pages — for instance, sometimes it is faster to move from the end rather than from the top. But the core logic is simple: proper semantic markup gives blind users a clear mental model of the page and helps them find what they need quickly.
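The structure Sergey and Anatoly describe maps directly onto HTML landmark elements and heading levels; a minimal sketch (the section names here are illustrative, not from any specific site) might look like this:

```html
<header>
  <!-- Everything above the H1 is announced as "header" / "navigation" -->
  <nav aria-label="Main">…</nav>
</header>
<main>
  <!-- Pressing "1" in NVDA or JAWS jumps straight here -->
  <h1>Order status</h1>
  <section aria-labelledby="details-heading">
    <h2 id="details-heading">Delivery details</h2>
    …
  </section>
</main>
<footer>…</footer>
```

With this markup, a screen-reader user can jump by landmarks (header, main, footer) or by heading level, which is the "map" described above.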
Vadim: Sergey, if I understood correctly, you are talking about an expected page structure — for example, menu at the top, then the main heading and content. Are there such “stereotypes” that especially help with orientation?
Sergey: That is the common pattern, though exceptions exist. Usually we expect an upper area — a header, usually including navigation (collapsed or expanded is a detail) — and a main area where the content begins.
In essence, this is the same expectation sighted users have: they also expect information to be structured — some parts emphasised, some larger, some smaller. We expect the same from landmarks: they help us quickly understand the structure and find needed information.
2. Is there a specific UX pattern for blind users
Vadim: This brings us straight to UX patterns. Are blind users’ scenarios different from sighted users’ scenarios — especially in complex interfaces such as e-commerce: menus, filters, and checkout? What do you expect from such a path, ideally?
Evgeny: If we discuss convenience, it is largely individual — for sighted and blind users alike. In e-commerce, however, the key thing for me is different: full access to all functionality needed for the scenario. I must be able to open a catalogue, choose products, read reviews, add to basket, and complete checkout. If all key elements are accessible and work properly, that is already a strong sign I will return.
As for convenience, I am not very keen on speaking about it in absolute terms. Of course, if I am offered several options, I will pick the simplest interface where I can understand things faster and spend less time. But we cannot demand identical “ideal” scenarios from all shops: an interface may be complex not because designers built a quest, but because it supports many functions.
In any case, it will be harder for a blind user than for a sighted one, even with a simple interface, because you need time to learn structure and understand the path. But once familiar with it, the principle is unchanged: all functionality must be accessible, so you can press controls, perform actions, and achieve the goal that brought you to the service.
Anatoly: Convenience and accessibility are different, and accessibility is more fundamental. If we talk about a truly convenient e-commerce interface, I would cite Yandex Lavka — both app and web. A catalogue is shown on one page; sections are marked by headings; each product is a link, with only the product name inside the link and product details directly after it. No unnecessary elements.
As a result, users can quickly browse sections by headings and products by links. If a product is interesting, they can view details, add it to basket easily, and continue. It is one of the most efficient interfaces I have seen. When you land on such a site, you genuinely feel like ordering something — even if you did not intend to.
Evgeny: But here is another important point. In the simple interface Anatoly described, a newcomer may still struggle. To perform even basic actions, a blind user must first learn to use a screen reader and, in general, to navigate digital services. If a person lacks those skills, even a good interface will feel hard — but that difficulty is not caused by the service itself; it comes from lack of screen-reader literacy.
Sergey: I'd say it's less about abstract convenience and more about how efficient the interaction actually is.
For example, in online shops with many filters, it is difficult to keep track of 4 selected checkboxes out of 15. What is needed is a clear, accessible summary of current filter state — a prompt showing what is selected right now.
This is useful not only for blind users, but for anyone who wants to orient quickly and move forward.
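One common way to provide the kind of filter summary Sergey describes is an ARIA live region whose text is updated whenever a checkbox changes; this is a hedged sketch of the pattern, not a prescription (the summary wording and markup are assumptions):

```html
<fieldset>
  <legend>Filters</legend>
  <label><input type="checkbox" name="size" value="m"> Size M</label>
  <label><input type="checkbox" name="color" value="blue"> Blue</label>
  <!-- …remaining checkboxes… -->
</fieldset>

<!-- aria-live="polite" makes the screen reader announce changes to this
     text without interrupting, so the user always hears the current state -->
<p aria-live="polite" id="filter-summary">
  3 filters applied: Size M, Blue, In stock
</p>
```

The same summary also helps sighted users, which matches Sergey's point that this is an efficiency feature rather than a special-needs one.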
Vadim: If I understood correctly, two points follow. First: blind users need a simple alternative path — as clear as possible, without extra complexity, even in large catalogues; the complex path can remain an additional option. Second: it should always be clear what is happening now — the current state of elements should be visible in any scenario. That seems to be a universal need, regardless of users’ abilities.
Evgeny: Yes.
3. What formally passes standards but fails in reality
Vadim: Which accessibility issues most often pass WCAG or local standards formally, yet still interfere with real use via screen readers? Do you have examples where standards are followed but practical design should still be different?
Anatoly: In most cases, when something is implemented according to standards, behaviour is predictable and expected. Across platforms and components there is a common pattern, and a good component behaves as described in its specification.
So the idea that “do it by standard and it will be formally accessible but inconvenient” is, in my view, highly exaggerated. Standards usually provide a clear and proven behaviour model.
Sergey: I thought about this question as well and cannot give a strong example where everything follows standards yet remains inconvenient. If standards are followed properly and the implementation is careful, issues usually do not appear. Hidden issues can exist, but that is not the case of “standard-compliant but inconvenient”. If a component is done properly and fulfils its role, it should not create problems.
4. At what stage to involve accessibility testing
Vadim: At what stage are you usually involved in accessibility testing — design, prototypes, early versions, or final stage? And how would you describe an ideal process?
Evgeny: Unfortunately, most teams approach us when a site or service is already built. That means extra work — more time, more budget, basically reworking a finished product. It is much better to include accessibility from the very start, during product design. That saves resources and improves quality from day one.
Sergey: I would add this. When there is a finished interface, discussion becomes concrete: we see exact elements, understand what works and what does not, and can point to specific fixes. It is practical, but it requires more effort and time than doing things earlier. We have less experience in truly early collaboration; being in a chat is one thing, building a responsive process during development is another. In practice, work often starts with an existing website or app, which increases complexity for developers.
Evgeny: Recently we had a project where we joined from the very beginning: a museum landing page for visitors with different limitations.
First we run an introduction: a short lecture or webinar for the manager, designer, and developer. We show how screen readers work and how to move by headings, landmarks, and lists. We explain that for us the key point is not large visual text or images, but whether the interface can be understood by ear — whether the screen reader can correctly announce elements and communicate what is happening.
After that introduction, the team, already equipped with core concepts, designs the interface. When the interface appears, we join as testers: we inspect it from the perspective of blind users, identify what works and what needs correction. Then we have a practical loop: review, analysis, and revision.
This sequence — basic education, then development, then testing — makes the process more conscious and efficient than involving us only at the end.
Anatoly: The core idea is simple: the closer to the roots of product development you start considering accessibility, the cheaper and more efficient the result. But there are two separate questions: accessibility testing and accessibility-aware development.
If we are talking about testing, Sergey is absolutely right. You can't really test accessibility on static mock-ups. To test accessibility, you need a real live interface. Before that exists, discussing testing is mostly pointless.
If we are talking about accessibility-aware development, then during interface design and tooling selection, it's better to pick a framework that already ships accessible components. During implementation, if accessibility is kept in mind, teams avoid heavy customisation without necessity. And if they do customise, they can preserve accessibility immediately.
Vadim: Suppose we are at the final stage: we compare UX options, test hypotheses, and choose a final version. Can we, at this stage, evaluate screen-reader behaviour using real examples from other websites, identifying strong and weak paths before approval?
Anatoly: If we discuss user journeys, splitting users into blind and sighted is not always useful. Digital accessibility is largely about components. If a journey convenient for sighted users is made accessible, it will likely be convenient for blind users as well. The key is that elements — for instance basket notifications — remain available to screen readers.
Vadim: Then there is a speed issue. Visual processing is usually faster than listening to a screen reader. If screen-reader output is slower, users are delayed more than we would like, and this may influence decisions along the journey.
Anatoly: Look, the interface developer’s task is not to remove a person’s disability. That is not your task.
Vadim: So, to summarise, you are confident the journey should stay the same, and we should focus on specific components?
All: Yes.
Vadim: So there is no need to invent separate alternative journeys that are better for screen readers but worse visually?
Sergey: It is hard to imagine a case where everything is done according to standards and still inconvenient. First, standards themselves imply several paths. If a task is complex, there should be at least one alternative way to get where you need to go. That is not about “special users”; it simply increases the chance of completing the task.
Second, a practical example: in an online catalogue there may be many categories and subcategories, which can be lengthy in some cases. But if an alternative path exists — for example search, where you enter the product name directly and jump to it — that is natural. There is no need to design a separate path only for screen readers; what matters is to offer multiple paths and let people choose.
Anatoly: Let me illustrate two points. First, what Evgeny mentioned about text optimisation. When there is numeric data, it is better to place the number before the constant text. For example: “Mail, 23 new messages” rather than “Mail, new messages, 23”.
He also mentioned that interface text should not be overloaded. Do not put hints into control names. If a button name contains “click”, that is poor interface writing. And it is poor for sighted users as well — they do not want to read “click” in a button label either.
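Both of Anatoly's writing rules can be shown in markup; the labels here are invented examples of the pattern, not taken from a real product:

```html
<!-- Number first, constant text after: the useful part is heard sooner -->
<a href="/mail">Mail, 23 new messages</a>

<!-- Poor: an instruction baked into the control name -->
<button>Click to subscribe</button>

<!-- Better: the name is the action itself -->
<button>Subscribe</button>
```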
Vadim: Of course.
Anatoly: Here is another example that supports Sergey’s point. Suppose there is a date-picker. You can design it so the interface adapts to activation method. Press the button and focus goes straight into keyboard date input while a calendar appears alongside. If the user works by keyboard, typing digits is easier. If the element was activated with a mouse, the user will likely choose from the calendar instead of moving hands off the mouse. If we can consider such differences, we should.
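A rough sketch of the structure behind Anatoly's date-picker example; this is an assumption about how such a component could be marked up, not a reference to a specific library:

```html
<!-- On keyboard activation, focus moves into the text input so the user
     can type the date directly; the calendar grid sits alongside for
     pointer users who prefer to pick a day visually. -->
<button aria-haspopup="dialog">Choose date</button>

<div role="dialog" aria-label="Choose date">
  <label>Date (DD.MM.YYYY)
    <input type="text" inputmode="numeric" autocomplete="off">
  </label>
  <table role="grid" aria-label="Calendar">…</table>
</div>
```

The key design choice is that both paths lead to the same result, so neither input method is second-class.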
Sergey: Yes, excellent example.
Evgeny: Yes, genuinely a strong example.
Sergey: That is probably the answer to Vadim’s question.
Vadim: Yes, exactly. Thank you.
5. How web accessibility is changing (and where AI helps)
Vadim: Let us move on. How would you assess the current state of website accessibility? What has changed over the last three years? What remains a systemic problem?
Evgeny: This is my subjective impression, and it may differ from the others. But now there are many more websites that can be used one way or another. Why? Probably because platforms and components used to build websites have become more accessible, and because people have started paying more attention to digital accessibility — trying to make interfaces that can be used by different people with different limitations.
Even so, I feel that despite growth in accessible websites, much still needs to be done — especially in legislation and in education for managers, developers, and designers. Even when I now look for websites with obvious accessibility failures for webinars, it is harder than before to find something “visibly bad”.
Anatoly: There is the WebAIM Million study — an automated analysis of high-traffic pages across the web. According to it, around 95 to 98 per cent of those pages contain at least one accessibility error, and average error count per page is about 50. From that perspective, the global situation has not improved much year to year.
This partly aligns with Evgeny’s subjective impression: yes, good solutions exist. But I still cannot describe the overall state of web accessibility as good.
There are especially many issues in professional tools: task managers, spreadsheets, documents, and similar services often remain poorly accessible. Even some domestic operating systems are built without sufficient attention to digital accessibility standards. As a result, many practical difficulties remain, and this part of blind users’ experience is still far from ideal.
Evgeny: Yes, I agree with Anatoly. For ordinary informational websites that we visit most often, obvious issues seem less common to me now. But as soon as we move to professional tools, the picture changes. Requirements are higher there — because it is a working environment, and accessibility determines not comfort but ability to work. And in this area, there are still many issues.
A good example is one well-known service. It is built by a company that has publicly promoted digital accessibility for years, with courses and materials — yet their own product remains, to put it mildly, inaccessible for blind users, and likely for others as well.
Vadim: If I understood correctly, the main problems begin not on simple informational pages but in complex tools and end-to-end scenarios — where a user must complete a process entirely via keyboard and speech output. Is that right?
Sergey: Yes, that is crucial. If you open a page where there is simply text and headings, everything may look more or less fine. Real interaction starts when you need to do something: open a menu, choose an option, fill in a form, complete steps. That is where issues most often begin.
So it is difficult for me to say simply whether things are better or worse: the internet itself has changed. Ten to fifteen years ago, we mostly went online to read or watch something — mostly static content. Now we come to perform actions: process documents, buy things, fill in, submit, create. And in such contexts, every detail matters.
A typical situation: everything works until the final step, and then a dropdown appears that cannot be used with a screen reader. That is the end of the process. Before that, everything looked “almost accessible”, but completion becomes impossible.
Over time, you may adapt to small issues, but that does not mean they are acceptable. One critical error in a chain is enough to prevent a user from completing the scenario independently. That is why accessibility must hold at every stage — from start to result.
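The "dropdown at the final step" failure Sergey describes usually looks like the first fragment below: a styled div with no semantics. A hedged sketch of the broken and the working versions (class names and options are illustrative):

```html
<!-- Broken: a screen reader announces nothing useful here, no keyboard
     access, no expanded/collapsed state. The journey ends at this element. -->
<div class="dropdown" onclick="toggle()">Delivery time</div>

<!-- Working option 1: a native control, accessible by default -->
<select aria-label="Delivery time">
  <option>As soon as possible</option>
  <option>Evening</option>
</select>

<!-- Working option 2: a custom widget with explicit semantics and state -->
<button aria-haspopup="listbox" aria-expanded="false">Delivery time</button>
```

A native control is almost always the cheaper fix; the custom version also needs keyboard handling and focus management in script to be genuinely usable.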
Vadim: Thank you, this is a very important point: sometimes one small step is enough to make the entire process inaccessible.
Sergey: Yes, exactly.
Vadim: How does AI help in your daily use of websites? Are there tools built into screen readers, or working alongside them, that genuinely help?
Sergey: There is a typical case: one or several buttons on a website are unlabeled. AI can sometimes genuinely help here. I have had several occasions where I took a screenshot and sent it to Be My Eyes to recognise what group of controls I was dealing with and find the one I needed.
VoiceOver and TalkBack have features that try to infer button purpose by icon: they see a gear and suggest it is settings, even when no text label exists. That often helps when an interface is “not fully accessible”. But it is important to understand: if an interface is truly poor, AI will not rescue it.
Another example is image description: where there is a graphic link, recognition may at least suggest what the author meant. Overall, AI is currently a targeted supporting tool. It does not replace proper accessibility rules.
Evgeny: In general, not very often in my case. Usually the issue is solved more simply: if a service is inconvenient, I switch to an alternative where required scenarios can be completed faster and with fewer obstacles.
Sergey: Yes, workaround strategies are a normal part of experience. But it is highly individual. For example, for some time I used Telegram web in addition to Telegram on iOS. Later I realised a different combination was more efficient for me: Unigram on laptop plus Telegram on iOS. Each blind user builds their own set of tools.
Evgeny: Telegram is a good example. On Windows we often use Unigram, together with an add-on developed by blind engineers. It helps in two ways at once: improves accessibility and noticeably accelerates work.
After installation, you get additional hotkeys for frequent actions. This is especially important when Telegram becomes a working tool: many chats, many operations, and the need to orient quickly and execute tasks. In that mode, such extensions genuinely increase efficiency.
Sergey: Absolutely.
6. Accessibility: social responsibility or business effect
Vadim: And the last practical question. Is digital accessibility first and foremost social responsibility, or is it a business decision with measurable effect? This is exactly what any client asks as soon as the topic comes up. How would you answer?
Sergey: Both.
Evgeny: Yes, I think it is both social responsibility and business. These things should go together — one does not exclude the other.
Vadim: Can business measure the effect of accessibility? If yes, how?
Sergey: For services like e-commerce, there is definitely an effect. Measuring it in strict numbers is often difficult. But from my own perspective: if I can choose between a predictable, understandable interface and one where I have to guess and click around blindly, which one will I use?
Some services cannot be used independently at all. Then I think ten times whether I need them. In essence, this is similar to the broader question of intuitive interface design: how do we measure impact afterwards?
But the basic logic is straightforward: making an interface accessible does not make things worse. If a blind user chooses between an accessible service and a less accessible competitor, the accessible one gets the preference, all else equal. How big that advantage is may vary, but as a way to combine social responsibility and business interest, it works.
Evgeny: If I answer quickly and honestly: business primarily builds interfaces for sighted users — they are the majority, and visual UX directly affects sales. That is normal.
If you ask “how much money accessibility brings”, then focusing only on blind users, effect may look small — we are indeed a smaller group, and active screen-reader users are fewer still.
But accessibility is not only about blind users. It is also about people with different disabilities, age-related changes, and temporary limitations. If we cut all these people out, that's a huge chunk of the audience gone.
A useful example: Yandex research on accessibility settings. About 51% of people reported using such features — increasing font size, enabling dark theme, using subtitles, and so on. This shows accessibility is not niche but mass. Another key point is choice. If two services solve the same task, I choose the one I can actually use — even if it is more expensive.
For example: one grocery service may be cheaper but inaccessible. Another is more expensive but works with screen readers. In practice, I order where I can complete the order independently. And this is not only my case — many blind colleagues choose accessible services even when they are objectively more expensive.
If we have several pizza apps, there can be many reasons to choose one, but accessibility is often decisive. I need an app where, with a screen reader, I can browse options, choose toppings, adjust ingredients, and complete checkout. So I almost always choose the accessible service — even if the pizza there is slightly less tasty — simply because other options may not be usable for me.
Vadim: Thank you all. This conversation confirmed something important: accessibility is not a layer you add on top — it is part of how a product works. The points you raised — about structure, about components, about one broken step killing the whole flow — these are things any product team needs to hear.
Sergey: I hope so. These things really do stay outside most discussions.
Evgeny: Agreed. And the more such conversations happen, the better.
Vadim: Our joint work on the Special View portal was built on exactly these principles, and its recognition by Rating Runeta in 2025 is a good sign that a systematic accessibility approach and overall product quality go hand in hand. Thank you all — a very honest conversation.
Glossary: key digital accessibility terms
- Screen reader
- Assistive software that converts on-screen content into speech or braille. Common options include NVDA and JAWS on Windows, VoiceOver on Apple devices, and TalkBack on Android. Screen readers read semantic markup rather than pixels, which is why code quality directly affects user experience quality.
- WCAG (Web Content Accessibility Guidelines)
- International guidance for web accessibility developed by W3C. It is built on four principles: perceivable, operable, understandable, and robust. Level AA is the baseline target for most projects. Accessibility legislation in many countries relies on WCAG.
- GOST R 52872-2019
- Russian digital accessibility standard based on WCAG 2.1. One of its co-authors is Anatoly Popko, who took part in this conversation.
- Keyboard navigation
- Ability to use a website fully without a mouse. It is a core interaction mode for screen-reader users and people with motor impairments, and a baseline WCAG requirement.
- Semantic markup and landmarks
- Using HTML elements according to their purpose: navigation, main content, headings, buttons, and so on. Screen readers rely on this structure for rapid page navigation — similar to how sighted users visually scan a layout.
- Contrast ratio
Brightness difference between foreground and background. Critical for low-vision users and for anyone reading a screen in bright sunlight. WCAG defines minimum thresholds, and checks can be automated.
- European Accessibility Act (EAA)
- EU directive requiring accessibility for digital products and services from June 2025. For businesses operating in European markets, digital accessibility is a legal requirement.
- WebAIM Million
- Annual large-scale study of accessibility across one million highly visited web pages. According to recent reports, over 95% contain at least one accessibility error.