Eyes Wide Open: How AI Is Giving People with Low Vision a New Way to See the World

[Header image: comic-style illustration of a person using a smartphone to scan their surroundings, with AI recognition icons floating around them.]

Picture this. You are standing in the cereal aisle, staring at a wall of boxes that all look roughly identical. You pick one up. You tilt it toward the light. You squint heroically. You put it back. You pick up another one. At this point you are about 60% sure you have selected a nutritious high-fibre breakfast and about 40% sure it is dog biscuits.


This is a regular Tuesday for a lot of people with low vision. And for a long time, the options were pretty limited: ask a stranger, bring someone with you, or memorise the shelf layout and quietly hope no one had done a restock.


AI is changing that. Not in a big flashy "robots will save us all" kind of way, but in quiet, practical, genuinely useful ways. Visual AI, which is basically the ability of a device to look at something and tell you what it is, is starting to give people with low vision a level of everyday independence that many had simply stopped expecting.


So What Is Visual AI, Exactly?

In short: you point your phone camera at something, and an app tells you what it sees. A person, a product, a sign, a document, a button. That is the core of it. No special hardware required. Just a smartphone and the right app.
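
For the technically curious, the pattern behind all of these apps is the same simple loop: capture a frame, ask a vision model to describe it, and speak the answer. Here is a minimal sketch of that loop in Python. The vision model itself is deliberately left as a placeholder, because every app plugs in its own; this shows the shape, not any vendor's implementation.

```python
# Skeleton of the visual-AI loop: capture -> describe -> speak.
# describe_image() is a placeholder; real apps plug in their own vision
# model (on-device or cloud), so this is a shape, not an implementation.

def describe_image(image_bytes: bytes) -> str:
    """Placeholder: send the frame to a vision model, return a description."""
    raise NotImplementedError("plug in a vision model here")

def speak(text: str) -> None:
    """Placeholder: hand the text to the platform's text-to-speech engine."""
    print(text)

def identify(photo_path: str) -> None:
    # Read a captured frame from disk; a real app streams from the camera.
    with open(photo_path, "rb") as f:
        frame = f.read()
    speak(describe_image(frame))
```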


Tools like Microsoft's Seeing AI, Google's Lookout, and Be My AI have been building this capability out for a few years now. They are impressive, not in the way that gets a standing ovation at a tech conference, but in the way that matters to someone who has not been able to read their own mail independently for ten years.


Knowing Who Just Walked In

One of the most useful things visual AI can do is recognise faces. And before anyone raises an eyebrow, this is not about surveillance. It is about being able to walk into a meeting room and have your phone quietly let you know that the person heading your way is your colleague from the third floor, not someone you have never met.


For people with low vision, not being able to identify faces carries a social tax that most people do not think about. Accidentally walking past someone you know. Blanking on a name because you could not see who was talking. Overcompensating with an enthusiastic "hey!" to absolutely everyone, just in case. These things add up.


Seeing AI lets you save faces with names attached, so the app can recognise saved people when they appear in view. It is not flawless, and lighting matters a lot, but the technology is improving fast and the direction of travel is clear.
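
Seeing AI's internals are not public, but the general technique is well understood: turn each saved face into a numerical encoding, then compare new faces against the saved set. Here is a rough sketch of that idea using the open-source face_recognition library; the names and photo files are made up for illustration.

```python
# Sketch of the save-a-face-then-recognise-it technique, using the
# open-source face_recognition library. This illustrates the general
# approach, not how Seeing AI is actually implemented.
import face_recognition

# "Saving" a face: compute a 128-number encoding from a labelled photo.
# The name and file are hypothetical.
known_names = ["Priya from the third floor"]
known_encodings = [
    face_recognition.face_encodings(
        face_recognition.load_image_file("priya.jpg")
    )[0]  # assumes exactly one face in the saved photo
]

# Later: check every face in a new camera frame against the saved set.
frame = face_recognition.load_image_file("meeting_room.jpg")
for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(known_encodings, encoding)
    for name, matched in zip(known_names, matches):
        if matched:
            print(f"{name} is in view")
```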


Back to That Supermarket

Product identification is one of the best-developed features of these tools. Scan a barcode and most apps will read out the product name, description, and nutritional information. Point the camera at packaging without a barcode, and AI can often identify it anyway.
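
As a rough illustration of the barcode half of this, here is what the scan-and-look-up step can look like in Python, decoding the barcode with the pyzbar library and fetching product details from the public Open Food Facts database. The photo name is made up, and the exact response fields vary by product.

```python
# Sketch of barcode-based product identification: decode the barcode from
# a photo with pyzbar, then look the code up in Open Food Facts.
# The photo name is hypothetical; response fields vary by product.
import requests
from PIL import Image
from pyzbar.pyzbar import decode

# Decode whatever barcodes appear in the photo.
codes = decode(Image.open("cereal_box.jpg"))
if codes:
    barcode = codes[0].data.decode("utf-8")
    url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"
    data = requests.get(url, timeout=10).json()
    product = data.get("product", {})
    print(product.get("product_name", "Unknown product"))
    print("fibre per 100g:",
          product.get("nutriments", {}).get("fiber_100g", "n/a"))
```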


We are talking about reading expiry dates, telling tomato soup from lentil soup, identifying the right medication from a row of near-identical boxes. Tasks that sighted people do in a few seconds without thinking. Tasks that, for someone with low vision, have historically meant either asking for help or taking an educated guess. Visual AI removes that dependency. It is just there, in your pocket, whenever you need it.


Signs, Streets, and the Lift Button Situation

Getting around in built environments is one of the trickier parts of life with low vision. Street signs, building numbers, door labels, platform signs, lift panels. They all assume you can see them clearly. Visual AI is starting to fill those gaps.


Text recognition built into these apps can read street signs aloud. Some can give a sense of what is around you and where. For someone who navigates largely from memory and familiar landmarks, having an app that can confirm a street name or read a door number makes a real difference to how freely and confidently they move through the world.
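
The text-reading part of this is classic OCR plus text-to-speech. A simple desktop stand-in for what the mobile apps do, assuming the Tesseract OCR engine and the pyttsx3 speech library are installed; the photo name is made up.

```python
# Sketch of read-a-sign-aloud: OCR the photo with Tesseract, then speak
# the result with pyttsx3. A desktop stand-in for what mobile apps do.
import pytesseract
import pyttsx3
from PIL import Image

# Extract whatever text Tesseract can find in the photo (hypothetical file).
text = pytesseract.image_to_string(Image.open("street_sign.jpg")).strip()

# Read it aloud with the system text-to-speech voice.
engine = pyttsx3.init()
engine.say(text if text else "No readable text found")
engine.runAndWait()
```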


And then there is the lift button. This sounds like a small thing. It is not. Many lifts have tiny embossed numbers that are genuinely hard to distinguish, especially in older buildings where the tactile markings have worn down to almost nothing. Hovering your phone near the panel and having the app tell you which button is which floor is exactly the kind of mundane, invisible, completely necessary assistance that changes the experience of being in a building. Trust me, pressing the wrong floor three times while people watch is character-building in all the wrong ways.


It Goes Further Than Reading Text

Where things get genuinely interesting is scene understanding. The better AI tools, like GPT-4 Vision or the AI layer in Be My Eyes, do not just read text. They describe what is happening. Point the camera at a room and they can tell you how it is laid out, what is on the table, whether there is anyone in it.
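
To make "describe what is happening" concrete, here is roughly what a scene-description request looks like using the OpenAI Python SDK. The model name, prompt, and photo are illustrative; Be My Eyes' own integration will differ.

```python
# Sketch of a scene-description request via the OpenAI Python SDK.
# Model, prompt, and photo are illustrative, not what any specific
# app actually sends.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Encode a captured photo (hypothetical file) as base64 for the API.
with open("living_room.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe this room for someone with low vision: "
                     "the layout, what is on the table, and any people."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```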


You can photograph a handwritten note and have it read back to you. A restaurant menu. A letter from your bank. A prescription label. A piece of clothing (yes, including what colour it is, which is genuinely useful when you are trying to leave the house looking intentional rather than accidental). These are not future features. They exist now, on devices most people already own, often for free.


It Is Not Perfect. That Is Fine.

Lighting can throw it off. So can unusual angles, reflective surfaces, and cluttered backgrounds. AI will occasionally describe something only partially accurately, with total confidence, which is its own special kind of entertaining. Facial recognition has well-documented accuracy issues for certain demographic groups; that matters, and it needs to keep improving.


But assistive technology has never needed to be perfect. A white cane does not tell you what is around the corner. A screen reader does not make a badly designed website any less painful to use. The bar is not perfection. The bar is whether the tool helps. Whether it increases independence, reduces the need to rely on others for small things, and makes daily life a bit less effortful. By that measure, visual AI is already doing its job, and it is getting better.


What This Means for Workplaces

Organisations that employ people with low vision have traditionally needed to invest in specialist hardware and bespoke software. That still matters. But increasingly, some of the most powerful accessibility tools available to an employee with low vision are already on their phone. Built into iOS and Android. Free or close to it.


Which raises a practical question for managers, HR teams, and anyone responsible for inclusion at work. Do you know what is out there? Are your workplace adjustment conversations keeping up with what is now possible? And is your environment one where someone using their phone as an accessibility tool feels comfortable doing so, without having to stop and explain themselves?


Worth noting too: visual AI works best in environments that are well lit, clearly signed, and logically laid out. Which is, conveniently, also just a description of a well-designed workplace. Good design and good technology work better together than either does alone.


The Bigger Picture

Living with low vision has largely meant managing absence. The absence of detail, of legible text, of recognisable faces. People adapt. They develop workarounds, ask for help, lean on memory. And they do it quietly, because the alternative is constantly drawing attention to something they would rather just get on with.


Visual AI is not fixing eyesight. But it is providing the information that eyesight would have given. It is describing the scene. Reading the label. Confirming who just walked in. In a quiet, sometimes slightly imperfect, genuinely useful way, it is giving context back.


That is not a small thing. And as these tools keep improving, the question worth asking is whether the conversations around accessibility are keeping pace. Whether organisations and individuals know what is available. Whether we are making it easy for people to find and use tools that can genuinely make a difference.


If you have never tried one of these apps, I genuinely recommend it. Point your phone at something and see what it says. It takes about four minutes, and it might just shift how you think about what it actually means to live with low vision day to day, and what is now possible that was not five years ago.


💥 Small change. Big impact.
