eBraille is a pair of concept eyeglasses that enables visually impaired people to read and navigate their surroundings without assistance. At its core, it is a real-time image-analytics device designed for the visually impaired.
The device analyses images to extract information from them in real time. Using Optical Character Recognition (OCR), eBraille detects and extracts printed text or handwritten letters and digits from notes, newspapers, signboards, marker boards, and other sources, and converts the extracted text to speech.
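The write-up does not name the OCR or speech libraries used, so the following is only a minimal sketch of the text-to-speech reading pipeline, assuming Tesseract (via pytesseract) for OCR, OpenCV for preprocessing, and an offline engine (pyttsx3) for speech:

```python
# Minimal OCR-to-speech sketch (illustrative only; the actual eBraille
# software stack is not specified in this write-up).
import cv2                 # image decoding and preprocessing
import pytesseract         # assumed OCR backend (Tesseract)
import pyttsx3             # assumed offline text-to-speech engine


def read_frame_aloud(frame_path: str) -> None:
    """Extract text from a single camera frame and speak it."""
    frame = cv2.imread(frame_path)
    if frame is None:
        raise FileNotFoundError(frame_path)

    # Grayscale plus Otsu thresholding usually improves OCR on signs and notes.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    text = pytesseract.image_to_string(binary).strip()
    if not text:
        return  # nothing readable in this frame

    engine = pyttsx3.init()  # runs locally, no internet connection required
    engine.say(text)
    engine.runAndWait()


if __name__ == "__main__":
    read_frame_aloud("signboard.jpg")  # hypothetical sample frame
```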
The device also detects objects, products, and people in images and communicates this information to the user via speech. The image-analytics software runs entirely on the device and does not require an internet connection, eliminating the delays caused by cloud connectivity issues.
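The detection model itself is not described, so the sketch below only illustrates the idea of an on-device "detect and announce" loop, assuming a pre-trained torchvision detector and the same offline pyttsx3 speech engine as above (a deployed device would bundle the weights rather than download them on first use):

```python
# Minimal on-device object detection sketch (illustrative; not the
# actual eBraille model). Detects objects in a frame and speaks their names.
import torch
import pyttsx3
from PIL import Image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)
from torchvision.transforms.functional import to_tensor


def announce_objects(frame_path: str, score_threshold: float = 0.7) -> None:
    """Detect objects in a frame and read their names aloud."""
    weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    model = fasterrcnn_resnet50_fpn(weights=weights).eval()
    categories = weights.meta["categories"]  # COCO class names

    image = Image.open(frame_path).convert("RGB")
    with torch.no_grad():
        predictions = model([to_tensor(image)])[0]

    # Keep only confident detections and de-duplicate class names.
    names = sorted({
        categories[label]
        for label, score in zip(predictions["labels"], predictions["scores"])
        if score >= score_threshold
    })
    if not names:
        return

    engine = pyttsx3.init()  # offline speech output
    engine.say("I can see " + ", ".join(names))
    engine.runAndWait()


if __name__ == "__main__":
    announce_objects("street_scene.jpg")  # hypothetical sample frame
```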
The idea for eBraille was born during a hardware hackathon I chaired in 2018. The hardware consists of a powerful processor capable of running image-processing algorithms, four high-resolution cameras, and a rechargeable battery, all embedded in stylish eyeglasses, together with a Bluetooth earpiece for speech output.
Screenshot of the completed 3D model of eBraille before rendering
Initial ideation during a hardware hackathon, focusing on assistive technology for visually impaired individuals. Brainstorming sessions to identify key features and technical requirements.
Defined the hardware components including processor requirements, camera specifications, battery life, and connectivity options. Established software requirements for OCR and object detection.
Created detailed 3D models of the glasses design, focusing on ergonomics and aesthetics. Developed functional prototypes to test core features and user interaction.
Camera design concept with 12MP resolution
Charging port design with magnetic connection
Bluetooth earpiece for audio feedback
Concept rendering of eBraille in use