A Whole New Vision

On how the Vision Pro could let us see the world through others’ eyes

February 10, 2024

No, I don’t have a Vision Pro. I likely won’t be getting one anytime soon. Early reviews and videos certainly intrigue me. Especially as a designer. Especially for the totally new interaction and UI paradigms to explore.

It’s been on my mind quite a bit lately as several nerds I follow (and a couple friends) have gotten one and toyed around with it. The other day, though, I had a thought about Vision Pro that’s got me really intrigued about a potential future use that’s probably a LOT different than what *most* people are talking about right now.

[Image: A press image of the Apple Vision Pro, looking inside to show two screens displaying a landscape photo with a mountain in the background and a lake in the foreground. Caption: The Vision Pro is a fancy enclosure around some very high-fidelity screens. Credit: Apple PR]

Some context from left field

Late in 2022, my mom had a stroke. She’s doing well now, but it impacted an area of her brain that affects her vision.

[Image: A simple illustration of two eye-shaped frames with a pick comb in the middle of each. Caption: What a non-impaired eye might see when looking at a simple pick comb on a light blue background.]

She has a permanent condition known as left visual field cut (or hemianopsia). This condition means that the entire left side of her visual field has ceased to exist for her. This doesn’t mean that her left eye doesn’t work. Both eyes can see … just not anything on the left half of their normal field of view.

[Image: A simple illustration of two eye-shaped frames with a pick comb in the middle of each; the left half of each eye is covered in black. Caption: What we assume someone with left visual field cut might see.]

In practice, this can be quite trippy to consider. When I hold up fingers directly in front of her face, just to the left of her nose, she can’t see them to count them. When she looks at a sheet of paper close enough to her face to read, she can’t read all the way across unless she physically turns her head to the left to find the edge of the page.

[Image: A simple illustration of two eye-shaped frames with an octopus in each, 8 tentacles stretching out to the right. Caption: What the human brain might see when only half of the information is present, making its best guess to fill in the blanks.]

But what’s really mind-bending about this is that she doesn’t see blankness in that area. She doesn’t recognize that it’s missing at all. Her brain seems to auto-complete with what it expects. (All humans do this all the time, particularly with things in our periphery.) I’ve started likening this to AI, though: it’s as if our brains are making their best guess at what could be there and filling it in for us on the fly. In some cases, this can lead to weird “hallucinations.” She might look at the side of a chair and see a couch (expecting the left side to continue much farther). Or look at the teeth of a comb and see an octopus, because the other half of the comb falls in her left field and the teeth on the right resemble tentacles.

Back to the Vision Pro

The interesting thing that wiggled into my brain while thinking about the Vision Pro is that it essentially replaces our view of the world. Its field of view is smaller, to be sure (roughly 100 degrees vs. the 180-ish degrees of human vision), but the screens inside present a near-instantaneous view of the world, filtered through the software of the Vision Pro’s computer.

So what if we could manipulate that view? I know that right now it’s protected, for VERY good reason, and not something the average developer can just manipulate. But what if we could? Could we, for instance, remove the left field from the view, so that a typical user sees only the right half of the world? Would that help me (or any person without left visual field cut) see and understand the world my mom sees? Conversely, could we take the full 180-degree view and show it on only the right side of the panel, so that someone with left visual field cut would have the entire range of vision in front of them?
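To make that first idea concrete, here’s a minimal sketch of the manipulation in code. It’s purely hypothetical: visionOS doesn’t expose the passthrough camera feed to third-party apps, so this operates on an ordinary Core Image frame instead, blacking out the left half to approximate a left visual field cut.

```swift
import CoreImage

// Hypothetical sketch: approximate a left visual field cut by blacking out
// the left half of a frame. visionOS does not give third-party code access
// to real passthrough video, so this works on a plain CIImage instead.
func simulateLeftFieldCut(_ frame: CIImage) -> CIImage {
    let extent = frame.extent

    // Keep only the right half of the frame...
    let rightHalf = CGRect(x: extent.midX, y: extent.minY,
                           width: extent.width / 2, height: extent.height)
    let visible = frame.cropped(to: rightHalf)

    // ...and composite it over solid black, so the left half simply isn't there.
    let black = CIImage(color: .black).cropped(to: extent)
    return visible.composited(over: black)
}
```

(The converse idea, squeezing the full view into the right half, would be a similar crop-and-scale transform.)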

With software and screens directly in front of our eyes, in theory we could. Going beyond that, could we use software to color-adjust the view and show the world as those with color blindness see it? As people with other visual impairments see it?
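Here’s an equally hypothetical sketch of that color-adjustment idea, again on a plain image rather than real passthrough. It runs the frame through Core Image’s built-in CIColorMatrix filter using a commonly circulated protanopia (red-blindness) approximation; the matrix values are rough, and a faithful simulation would use a validated model (e.g., Machado et al.) and work in linear RGB.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Hypothetical sketch: approximate protanopia (red-blindness) with a single
// color matrix. These coefficients are a commonly circulated approximation,
// not a clinically validated model.
func simulateProtanopia(_ frame: CIImage) -> CIImage {
    let filter = CIFilter.colorMatrix()
    filter.inputImage = frame
    filter.rVector = CIVector(x: 0.567, y: 0.433, z: 0, w: 0)
    filter.gVector = CIVector(x: 0.558, y: 0.442, z: 0, w: 0)
    filter.bVector = CIVector(x: 0,     y: 0.242, z: 0.758, w: 0)
    return filter.outputImage ?? frame
}
```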

Would such opportunities help people to understand and feel more empathy toward those with visual impairments? Would it help drive more meaningful accessibility efforts? Could it actually address some visual impairments for people? I don’t know the answers, but I can’t wait to find out!