Deaf 911: Equal Access to 911 from Mobile Phones for the Deaf Community

February 24, 2009

Update: See the Deaf 911 research page for the latest details.

This is a short writeup of some of the work I’ve been doing recently. I want to start this post with a story based on an actual event to highlight the problem we are tackling:

Bob, a deaf man, was making a late-night grocery run and was heading back to his car. Before he could reach it, a mugger stopped him and threatened him with a knife. Even though Bob was carrying a gun, he handed over his wallet, but the mugger stabbed him anyway. Bob, in self-defense, shot the mugger. Bob realized that emergency help was required, but he could not simply call 911; instead, he sent an SMS to a hearing friend. The friend called 911 and relayed the problem to the operator. As per policy, this resulted in emergency help being dispatched to the friend's location instead of Bob's. By the time the responders were redirected to Bob's location, the mugger had lost too much blood and could not be resuscitated.

Existing Practices

Deaf people make heavy use of SMS (text messaging) to communicate with others while mobile. However, 911 centers currently do not have the resources to support these technologies. Even if it were possible, SMS is not desirable as the primary means of communicating with 911 for a number of reasons:

  1. SMS messages cannot be located in the same way that voice calls to 911 can.
  2. SMS messages are not necessarily delivered as soon as they are sent (i.e., SMS is store-and-forward).
  3. SMS senders cannot be tracked by existing cell-tower-based 911 location systems.
  4. SMS is exchanged message by message rather than character by character, unlike voice, where every sound is transmitted as soon as it is uttered.

In 1990, the Americans with Disabilities Act mandated that all 911 centers be able to communicate with TDDs (telecommunications devices for the deaf) to provide real-time access to 911 services for deaf people. TDDs use 1400 Hz and 1800 Hz tones to encode text as a series of bits according to the Baudot system. With the explosion of mobile phones came policies requiring manufacturers to produce phones compatible with mobile TDDs that could reliably send Baudot-coded signals to 911 centers. This allowed deaf people who owned a mobile TDD to reach 911 even when mobile. In practice, however, very few deaf people carry a separate TDD alongside their cellphone, since text-based messaging is the primary form of communication in the deaf community.
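To make the encoding concrete, here is a minimal Python sketch of how one character becomes a Baudot tone burst, assuming telephone-quality 8 kHz audio and the standard US TTY parameters (45.45 baud, mark = 1400 Hz, space = 1800 Hz). The abbreviated ITA2 table and all function names here are my own illustration, not code from our prototype:

    import math
    import struct
    import wave

    SAMPLE_RATE = 8000           # telephone-quality audio
    BAUD = 45.45                 # US TTY signalling rate
    MARK_HZ = 1400               # binary 1
    SPACE_HZ = 1800              # binary 0
    BIT_SAMPLES = int(SAMPLE_RATE / BAUD)

    # Abbreviated ITA2 (Baudot) letters table; each code is 5 bits,
    # stored so that bit 0 is transmitted first.
    ITA2_LETTERS = {'E': 0b00001, 'A': 0b00011, 'T': 0b10000, ' ': 0b00100}

    def char_to_bits(ch):
        """Frame one character: a start bit (space) followed by
        5 data bits, least-significant bit first."""
        code = ITA2_LETTERS[ch.upper()]
        return [0] + [(code >> i) & 1 for i in range(5)]

    def tone(freq, n_samples):
        """One tone segment. (A real modulator would keep the phase
        continuous across segments; this sketch restarts at zero.)"""
        return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                for i in range(n_samples)]

    def encode_char(ch):
        """Render one framed character as audio samples."""
        samples = []
        for bit in char_to_bits(ch):
            samples += tone(MARK_HZ if bit else SPACE_HZ, BIT_SAMPLES)
        samples += tone(MARK_HZ, int(1.5 * BIT_SAMPLES))  # stop: mark
        return samples

    def write_wav(path, samples):
        """Save samples as a 16-bit mono WAV file."""
        with wave.open(path, 'wb') as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(SAMPLE_RATE)
            w.writeframes(b''.join(struct.pack('<h', int(s * 32767))
                                   for s in samples))

    # Example: pre-generate the audio file for the letter A, offline.
    write_wav('letter_a.wav', encode_char('A'))

Because the encodings never change, files like this can be generated once, offline, and simply played back at call time.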

Our Solution

Our solution is to develop software for the mobile phone that does everything a mobile TDD would do. This includes encoding user-entered text according to the Baudot system and decoding the incoming audio to show text to the user, which requires direct, real-time access to the phone's incoming and outgoing voice streams. Encoding entered text into the appropriate sounds is performed by playing pre-generated audio files for each letter, digit, and symbol allowed by Baudot; because these encodings are fixed, there is no need to generate the audio at runtime. Decoding is performed using the Goertzel algorithm. Because we already know which tones (1400 Hz and 1800 Hz) we are looking for, the Goertzel algorithm lets us detect them with less computation than a general FFT.
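For the curious, the core of the Goertzel detection step looks roughly like the sketch below, again assuming 8 kHz audio and one analysis window per bit; the real decoder also has to handle start-bit detection and timing recovery, which are omitted here, and the function names are illustrative:

    import math

    def goertzel_power(samples, target_hz, sample_rate):
        """Squared magnitude of `samples` at `target_hz` via the
        Goertzel recurrence (cheaper than a full FFT when only
        one or two frequency bins are needed)."""
        n = len(samples)
        k = round(n * target_hz / sample_rate)   # nearest DFT bin
        coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
        s_prev, s_prev2 = 0.0, 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

    def classify_bit(window, sample_rate=8000):
        """Decide mark (1, 1400 Hz) vs. space (0, 1800 Hz) for one
        bit-sized window by comparing the power at the two tones."""
        mark = goertzel_power(window, 1400, sample_rate)
        space = goertzel_power(window, 1800, sample_rate)
        return 1 if mark > space else 0

    # Example: a pure 1400 Hz window (~one bit at 45.45 baud) is a mark.
    win = [math.sin(2 * math.pi * 1400 * i / 8000) for i in range(176)]
    assert classify_bit(win) == 1

Comparing the power at just the two known tones, window by window, is what makes the decoder cheap enough to run comfortably on a phone.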

Due to handset makers' security concerns, almost all modern phones prevent software developers from accessing the voice stream of an active phone call. Given this restriction, our current system has been implemented on the OpenMoko mobile phone, an open-source hardware and software platform whose Linux-based software stack gives us full access to the voice stream. Our prototype decoder runs at twice real time. We have tested the prototype with a traditional acoustic coupler and Georgia Tech's 911 center, with their much-appreciated cooperation, and achieved high accuracy rates.

NENA 2009 TDC/ODC

Dr. Thad Starner and I were invited to attend the NENA 2009 TDC/ODC, where Thad gave a talk on this work to an audience of policy makers, equipment manufacturers, and 911 center operators. During the talk I performed a live demo with the help of the Georgia Tech 911 center. We also brought along our acoustic coupler and ran a hands-on demo during the break for attendees to try out. Thad muffled the acoustic coupler's handset rest with his fleece coat, which helped keep the surrounding conversation from interfering with the demo.

Thanks to help from Richard Ray, Paul McLaren, and Steve O'Conor, we spent a considerable amount of time at the Orlando 911 center and gathered data that will help us further improve our prototype. We also talked with many manufacturers of 911 center equipment and will hopefully be able to procure a standard system to use for testing in the lab.

Future Work

Our current system is only a prototype and is therefore not ready for full deployment. We would like to gather more data from different types of 911 center equipment and use it to further tune our decoding algorithm. Additionally, we would like to work with handset manufacturers to recreate our application on mobile phones that are more popular in the deaf community, such as the Sidekick. This would allow us to perform more user studies by leveraging equipment that deaf people are already comfortable using.
