Monday, November 16, 2015

Breaking a different kind of language barrier: Sign language becomes sensor-based



American Sign Language is, for many deaf and hard-of-hearing people, the bridge to the wider world of interpersonal communication. But how do you communicate with ASL when a partner in a given conversation cannot interpret the visually based language?
Seeking to close that kind of communication gap, work is underway at Texas A&M University. Roozbeh Jafari, associate professor and principal investigator with the school’s Department of Biomedical Engineering and a researcher at its Center for Remote Health Technologies and Systems, is developing a newly sophisticated tool to make ASL understandable to everyone.
The results of Jafari’s project, and the long-term implications that stem from it, could change the way we approach interfacing with each other — and even with technology — all based on our hands, muscles and movements.

Vision Quest: Recent Challenges for ASL Translation

The ASL translation system doesn’t have an official name yet, but what it’s doing, and what it stands to do, is concrete and apparent. The goal is to translate ASL for all participants in a way that is more accurate, more portable and more reliable than ever before.
“There have been a few systems for translating American Sign Language automatically,” said Jafari, regarding devices that precede the new technology he is working to refine. “The most prominent among them have been based on cameras and vision … you would basically stand in front of a camera and the camera would track hand motion.”

It is a system for turning visually tracked movement into words. But the cameras that facilitate it only work well when the ASL gestures they track are precise enough for the computer on the other end to recognise. Miss that mark and the conversation between an ASL user and a non-ASL-using participant becomes difficult: words get lost, and communication breaks down. Add in the challenge of where a camera can be placed in a room full of ASL-using participants, and the fact that users have to carry a motion-tracking camera everywhere they go.
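To see why precision matters so much in that pipeline, consider a stripped-down sketch of template matching on a tracked hand path: the recognised word is whichever stored template lies closest, and if the signing is too imprecise nothing falls inside the acceptance threshold, so the word is simply dropped. The templates, distance measure and threshold here are invented for illustration and are not taken from any of the camera-based systems Jafari mentions.

from typing import Optional
import numpy as np

TEMPLATES = {  # hypothetical gestures: each entry is a (time_steps, 2) hand path
    "hello": np.array([[0, 0], [1, 1], [2, 2], [3, 3]], dtype=float),
    "thanks": np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float),
}

def recognise(trajectory: np.ndarray, threshold: float = 1.0) -> Optional[str]:
    """Return the closest template word, or None if nothing is close enough."""
    best_word, best_dist = None, float("inf")
    for word, template in TEMPLATES.items():
        dist = np.linalg.norm(trajectory - template) / len(template)
        if dist < best_dist:
            best_word, best_dist = word, dist
    return best_word if best_dist <= threshold else None

print(recognise(np.array([[0, 0], [1, 1.1], [2, 2.0], [3, 2.9]])))  # precise signing: 'hello'
print(recognise(np.array([[0, 3], [3, 0], [0, 3], [3, 0]])))        # imprecise signing: None, the word is lost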
In all of these factors, Jafari saw the need for a different ASL-interpreting tool.

Beyond Vision: Jafari’s Motion- and Muscle-Tracking Approach to ASL Translation

In Jafari’s project, the camera is out of the picture. Instead, his technology applies an external motion sensor and a wearable muscle-tracking sensor to create a new version of ASL translation.
“The sensor is based on EMG, or electromyogram technology,” Jafari said, referring to sensors the Mayo Clinic describes as measuring the electrical signals that motor neurons transmit to muscles, causing them to contract. EMGs can turn these signals into numerical values that computers and specialists are able to interpret.
“Combined with the external motion sensors, which show us the overall hand movement, the EMG allows us to discriminate between gestures,” he said. “A fine-grain of interpretation … motion sensors give us the overall sense and muscle activities give us information about the fine-grained intent.”
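To make that division of labour concrete, here is a minimal sketch of this kind of sensor fusion: coarse motion features from an inertial sensor are concatenated with fine-grained muscle features from an EMG envelope, and an off-the-shelf classifier learns to discriminate gestures. The feature choices, window sizes and classifier are illustrative assumptions, not details of Jafari’s actual system.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(imu_window: np.ndarray, emg_window: np.ndarray) -> np.ndarray:
    """Summarise one gesture window.

    imu_window: (n_samples, 3) accelerometer readings, the coarse hand motion
    emg_window: (n_samples, n_channels) rectified EMG amplitudes, the fine muscle activity
    """
    imu_feats = np.concatenate([imu_window.mean(axis=0), imu_window.std(axis=0)])
    emg_feats = np.concatenate([emg_window.mean(axis=0), emg_window.max(axis=0)])
    return np.concatenate([imu_feats, emg_feats])  # fused feature vector

# Toy training data: 40 labelled gesture windows, 2 gesture classes.
rng = np.random.default_rng(0)
X = np.stack([
    extract_features(rng.normal(size=(50, 3)), rng.random(size=(50, 4)))
    for _ in range(40)
])
y = rng.integers(0, 2, size=40)  # placeholder gesture labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new_window = extract_features(rng.normal(size=(50, 3)), rng.random(size=(50, 4)))
print("predicted gesture id:", clf.predict(new_window[None, :])[0])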

Next Steps: Focusing on the Details of New ASL Tech

When it comes to the ASL-interpreting technology underway at Texas A&M, the team has produced an operational proof-of-concept model. The next step is to refine the sensitivity and accuracy of the devices.
  • Currently, every wearer of the EMG sensor must be careful to position the wearable tech in a precise way each time they don the device; otherwise, the system must be “retrained” to register the ASL vocabulary the user employs. Jafari also stated that the team is working on ways to “make the system smarter, in a sense … to reduce or eliminate training time.”
  • At present, Jafari’s system recognises individual words but requires a pause between them. As the team develops the work further, the goal is for the translation engine to combine the input it receives into whole phrases and sentences, more akin to the way humans naturally communicate (a rough sketch of that word-to-phrase step follows this list).
  • The third prong of development is to increase the technology’s overall vocabulary.
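The snippet below is a minimal sketch of that word-to-phrase step: it groups individually recognised words into phrases whenever the gap between them stays under a pause threshold. The timestamps, the one-second threshold and the word list are illustrative assumptions, not output from Jafari’s system.

from typing import List, Tuple

def assemble_phrases(words: List[Tuple[float, str]], max_gap_s: float = 1.0) -> List[str]:
    """words: (timestamp_seconds, recognised_word) pairs, in time order."""
    phrases, current = [], []
    last_t = None
    for t, word in words:
        if last_t is not None and t - last_t > max_gap_s:
            phrases.append(" ".join(current))  # a long pause ends the phrase
            current = []
        current.append(word)
        last_t = t
    if current:
        phrases.append(" ".join(current))
    return phrases

# Example: four recognised words, with a long pause before the last one.
stream = [(0.0, "where"), (0.6, "is"), (1.1, "bathroom"), (3.5, "thanks")]
print(assemble_phrases(stream))  # ['where is bathroom', 'thanks']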
When all of Jafari’s developing tech is operating at the advanced level he describes, ASL users and their conversation partners will clearly benefit. But the applications of the sensor-based system extend beyond sign language and translation alone.
“When you think about it, you can use this for many other applications,” he said. “Think about your house … you might have a smart house, but right now to turn on and off all your devices you need to go to your mobile phone, each app, and then use them. What if you could control your house with hand gestures, communicating with each of your devices?”
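As a rough illustration of that smart-home scenario, a recognised gesture could simply be looked up in a table of device commands. The gesture names, devices and send_command stub below are hypothetical placeholders, not part of any real home-automation API.

GESTURE_TO_COMMAND = {
    "swipe_up": ("living_room_lights", "on"),
    "swipe_down": ("living_room_lights", "off"),
    "rotate_cw": ("thermostat", "raise_1_degree"),
}

def send_command(device: str, action: str) -> None:
    # Stand-in for whatever protocol a real smart home would use.
    print(f"-> {device}: {action}")

def handle_gesture(gesture: str) -> None:
    command = GESTURE_TO_COMMAND.get(gesture)
    if command is None:
        print(f"unrecognised gesture: {gesture}")
        return
    send_command(*command)

handle_gesture("swipe_up")   # -> living_room_lights: on
handle_gesture("wave")       # unrecognised gesture: wave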
For Jafari, starting with the mission to further facilitate ASL among all participants — and then extending into the home and future applications — the conversation is just getting underway.