In what could turn out to be a major breakthrough in accessibility, a Cornell assistant professor has created a wearable accessory, inspired by a necklace camera, that can track skin movement on the neck and face without any actual sound input. The device is a third-generation innovation that builds on two earlier notable projects. The first was C-Face, an earphone that studies cheek contours to track facial expressions and translates them into 3D emojis and voice commands. C-Face’s 2020 showcase as an unobtrusive facial expression tracker was followed by another interesting invention called NeckFace in August last year.
NeckFace was a wearable collar-like sensing device that used infrared cameras to capture an image of the chin and face from below the neck. The idea was to apply its core technology to tasks such as video calling without a front camera, silent voice recognition, and facial expression tracking in virtual reality. The concept resembles Apple’s rumored AR/VR headset, which would reportedly let users take FaceTime calls through their Memoji avatar. Now the mind behind C-Face and NeckFace is back with another promising device.
SpeeChin is a voice recognition device that removes the voice aspect of voice commands. The brainchild of Cheng Zhang, assistant professor of information science at the Cornell Ann S. Bowers College of Computing and Information Science, SpeeChin adopts the collar design and relies on an infrared camera to study the deformations that appear on the face and neck when an individual speaks. Apple likewise uses an infrared camera in the TrueDepth system on iPhones to generate a 3D face map for authentication and for creating Animojis.
Understated, but significant
One of the main strengths of SpeeChin is that it allows a person with a voice impairment to simply mimic speaking a voice command, and the device will follow the movements to understand what was said without any audible input. “This device has the potential to learn a person’s speech patterns, even with silent speech,” Zhang notes. Another use case is when someone wants to invoke Siri to complete a task, but the environment isn’t suitable for speaking out loud to a phone or smart speaker. What’s really impressive is that Zhang built the device himself at home while attending to his academic duties remotely. It’s remarkable how promising technology like this can be so simple in its engineering, and potentially far more affordable than the thousands of dollars that gadgets like Facebook’s Project Cambria headset or Apple’s rumored AR/VR headset are expected to cost.
SpeeChin consists of an infrared camera glued to a 3D-printed collar platform, with a coin attached to the base for added stability. Another promising aspect of SpeeChin’s design is privacy. Unlike Snap’s Spectacles or Facebook’s Ray-Ban Stories glasses, which record everything (and everyone) in the camera’s view, SpeeChin’s downward-facing camera is only positioned to capture the facial movements of the person wearing the smart collar. In testing, SpeeChin recognized English and Mandarin commands with 90.5% and 91.6% accuracy, respectively, although accuracy decreased when the subject walked. It’s unclear whether Zhang has found a partner to commercialize the idea, or whether the project will be open-sourced, but the camera-based device is ingenious and would benefit from wide distribution.