While I admittedly have no experience with blindness or the methods taught to blind people to help them navigate the world, I have a hunch there's potential to leverage the current LLM/AI stack to improve on those methods.
Are there any cool companies or open source projects experimenting with this?
https://www.ayes.ai
The leader in the field is Be My Eyes, of course. They've been working with Microsoft to integrate GPT-4o vision models into their app, with some great success. What we haven't seen yet is the move to live-video image recognition that could come from something like OrCam or Meta glasses (Be My Eyes recently announced a partnership with Meta). I'm guessing there are serious safety concerns: the model missing important information could lead someone vulnerable astray.
https://www.bemyeyes.com
https://www.bemyeyes.com/blog/be-my-eyes-meta-accessibility-...
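For anyone curious what that GPT-4o integration might look like at the API level, here's a minimal sketch using the OpenAI Python client: one camera frame in, a description string out. To be clear, the model name, prompt, and file path are my assumptions; Be My Eyes hasn't published their actual implementation.

    # Minimal sketch: send one camera frame to a vision model and ask for a
    # safety-conscious scene description. Illustrative only; the model name,
    # prompt, and file path are assumptions, not Be My Eyes' actual code.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def describe_frame(image_path: str) -> str:
        """Ask the model to describe a frame for a blind pedestrian."""
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode("utf-8")

        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": (
                        "Describe this scene for a blind pedestrian. Lead "
                        "with obstacles and anything safety-relevant. If you "
                        "are unsure about something, say so rather than guess."
                    )},
                    {"type": "image_url", "image_url": {
                        "url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
            max_tokens=200,
        )
        return response.choices[0].message.content

    print(describe_frame("frame.jpg"))  # hypothetical saved camera frame

A real live-video product would have to stream frames continuously and meet far tighter latency and reliability requirements than a one-shot call like this, which is probably exactly where those safety concerns bite.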
OrCam has a new product (woe upon those of us with the paltry OrCam MyEye 2) that the Meta glasses will be competing against; it comes in at an eye-watering >$4K price point and seems to do less.
https://www.orcam.com/en-us/orcam-myeye-3-pro
As with the hearing aid industry, which recently went over-the-counter and saw prices plummet, the vision aid product category is in temporary disarray as inexpensive new technologies make their way into a premium-priced market.
Thanks for all the info, this is very informative! Rarely do I root for Meta, but they do seem to be in the best position to create affordable tools that are also safe. It really needs to be 100% reliable, as there's no room for hallucinations when you're relying on it to get you across the street safely.
Anyway, this is all very exciting and definitely makes me a little more enthusiastic about the inevitable integration of these models into everyday life.
AR + AI for visual aid / accessibility. Somebody, do it.