Situation:
Vision-impaired and blind people currently have limited options for compensating for the loss of normal vision: guide dogs for walking navigation, book/label/signage-to-voice reading devices, and scene-description devices of limited capability.
Target:
Leverage current and emerging Wyze AI capabilities to provide a wearable, real-time, intuitive vision augmentation/replacement product. It would seamlessly deliver navigation, reading, scene description (including changes in progress), and voice and/or facial recognition as a continuous stream of verbal feedback. That stream could be steered both by the user and automatically by emerging conditions to focus on more detail in areas of immediate need or interest. The output would be similar to listening to a radio broadcast of a baseball game, leveraging the AI's ability both to summarize and to drill down into detail.
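One way to picture the "broadcast" behavior is a priority queue of narration snippets, where ambient summaries and urgent alerts compete for the speech channel. The sketch below is purely illustrative (none of these class or function names are a real Wyze API), showing how a safety alert could preempt routine narration:

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Hypothetical sketch of the narration loop: events from navigation, reading,
# scene description, and recognition compete on priority, and the speech
# stream always plays the most urgent item next.

@dataclass(order=True)
class NarrationEvent:
    priority: int                  # lower number = more urgent
    seq: int                       # tie-breaker so equal priorities stay FIFO
    text: str = field(compare=False)

class NarrationScheduler:
    def __init__(self) -> None:
        self._queue: list[NarrationEvent] = []
        self._counter = itertools.count()

    def submit(self, text: str, priority: int) -> None:
        """Queue a narration snippet (e.g. 0 = safety alert, 5 = ambient summary)."""
        heapq.heappush(self._queue, NarrationEvent(priority, next(self._counter), text))

    def next_utterance(self) -> str | None:
        """Pop the most urgent snippet for text-to-speech playback."""
        return heapq.heappop(self._queue).text if self._queue else None

# Example: an ambient summary is queued first, but a detected hazard jumps ahead.
scheduler = NarrationScheduler()
scheduler.submit("Quiet street, two parked cars on your right.", priority=5)
scheduler.submit("Curb edge one step ahead.", priority=0)
print(scheduler.next_utterance())  # "Curb edge one step ahead."
print(scheduler.next_utterance())  # the ambient summary follows
```

A user's "more detail" request or an automatic trigger would simply submit events at a higher or lower priority, which is what lets the same stream summarize or drill down.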
Verbal output options could include a single earbud or a bone-conduction transducer, leaving the user's native hearing intact for safety.
User steering input options could combine a microphone for verbal commands, a handheld wireless button controller, and inward-facing eyeglass-frame cameras for facial prompts (e.g. left- and right-eye wink patterns); a sketch of how these modalities could converge follows.
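However the steering hardware is realized, each modality could be normalized into one small command vocabulary before it reaches the narration engine. The mapping below is an assumption for illustration only, including the specific phrases, buttons, and wink patterns:

```python
from enum import Enum, auto

# Illustrative mapping only: normalizes the steering modalities above
# (voice, handheld buttons, wink patterns) into one command vocabulary.

class SteerCommand(Enum):
    MORE_DETAIL = auto()   # drill down on the current subject
    SUMMARIZE = auto()     # return to broad "broadcast" narration
    REPEAT = auto()        # replay the last utterance
    QUIET = auto()         # pause narration temporarily

VOICE_PHRASES = {
    "more detail": SteerCommand.MORE_DETAIL,
    "summary": SteerCommand.SUMMARIZE,
    "say again": SteerCommand.REPEAT,
    "quiet": SteerCommand.QUIET,
}

BUTTONS = {"A": SteerCommand.MORE_DETAIL, "B": SteerCommand.SUMMARIZE}

# Wink patterns as (left, right) wink counts within a short time window,
# e.g. a double right-eye wink asks for more detail.
WINK_PATTERNS = {(0, 2): SteerCommand.MORE_DETAIL, (2, 0): SteerCommand.SUMMARIZE}

def normalize(source: str, payload) -> SteerCommand | None:
    """Translate any input modality into a single SteerCommand, or None if unrecognized."""
    table = {"voice": VOICE_PHRASES, "button": BUTTONS, "wink": WINK_PATTERNS}[source]
    return table.get(payload)

assert normalize("wink", (0, 2)) is SteerCommand.MORE_DETAIL
assert normalize("voice", "say again") is SteerCommand.REPEAT
```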
Automatic steering input options could include sounds (e.g. fire alarms) and scene-condition changes (e.g. a person approaching, with recognition; the user moving the camera's point of view; or the user starting to move in a direction, which could automatically initiate near-field navigation assistance).
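These automatic triggers could be expressed as rules that inspect the latest sensor snapshot and inject narration at an appropriate priority, building on the scheduler sketch above. The field names, thresholds, and trigger set here are assumptions, not product specifications:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical rule table: each automatic trigger inspects the latest sensor
# snapshot and, when it fires, queues a narration event at a given priority.

@dataclass
class SensorSnapshot:
    alarm_db: float                    # loudest alarm-band sound level, in dB (assumed threshold below)
    approaching_person: Optional[str]  # recognized name, "" if unrecognized, None if nobody
    user_speed_mps: float              # wearer's walking speed, m/s

@dataclass
class Trigger:
    fires: Callable[[SensorSnapshot], bool]
    narration: Callable[[SensorSnapshot], str]
    priority: int                      # 0 = interrupt immediately

TRIGGERS = [
    Trigger(lambda s: s.alarm_db > 80.0,
            lambda s: "Alarm sounding nearby.", priority=0),
    Trigger(lambda s: s.approaching_person is not None,
            lambda s: f"{s.approaching_person or 'Someone'} is approaching.", priority=1),
    Trigger(lambda s: s.user_speed_mps > 0.3,
            lambda s: "Starting near-field navigation: path is clear ahead.", priority=2),
]

def evaluate(snapshot: SensorSnapshot, submit) -> None:
    """Run every trigger against the snapshot; fired ones are queued for speech."""
    for t in TRIGGERS:
        if t.fires(snapshot):
            submit(t.narration(snapshot), t.priority)

# Example: the wearer starts walking while a recognized friend approaches.
evaluate(SensorSnapshot(alarm_db=40.0, approaching_person="Alex", user_speed_mps=0.8),
         submit=lambda text, priority: print(priority, text))
```

Feeding fired triggers into the same priority queue as user commands is what keeps the product a single steerable stream rather than several competing alert channels.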