Exploring the Android Accessibility Framework

The Android SDK includes numerous features and services for the benefit of users with visual and hearing impairments. Users without such impairments also benefit from these features, especially when they are not paying complete attention to the device (such as when driving). Many of the most powerful accessibility features were added in Android 1.6 and 2.0, so check the API level for a specific class or method before using it within your application. Some of the accessibility features available within the Android SDK include:

  • The Speech Recognition Framework.
  • The Text-To-Speech (TTS) Framework.
  • The ability to enable haptic feedback (that vibration you feel when you press a button, rather like a rumble pack game controller) on any View object (API Level 3 and higher). See the setHapticFeedbackEnabled() method of the View class.
  • The ability to set associated metadata, such as a text description of an ImageView control, on any View object (API Level 4 and higher). This feature is often very helpful for the visually impaired. See the setContentDescription() method of the View class (a short sketch of this method and setHapticFeedbackEnabled() follows this list).
  • The ability to create and extend accessibility applications in conjunction with the Android Accessibility framework. See the following packages to get started writing accessibility applications: android.accessibilityservice and android.view.accessibility. There are also a number of accessibility applications, such as KickBack, SoundBack, and TalkBack, which ship with the platform. For more information, see the device settings under Settings, Accessibility.
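For example, here is a minimal sketch (the control IDs and description text are hypothetical, and the calls belong somewhere like an activity's onCreate() after setContentView()) showing both of these View-level accessibility hooks:

ImageView photo = (ImageView) findViewById(R.id.PhotoView); // hypothetical ID
// Give screen readers such as TalkBack something meaningful to announce.
photo.setContentDescription("Photo of the hiking trail at sunrise");

Button submit = (Button) findViewById(R.id.ButtonSubmit); // hypothetical ID
submit.setHapticFeedbackEnabled(true); // allow vibration feedback on this control
// Optionally trigger feedback manually, for example from a click handler:
submit.performHapticFeedback(HapticFeedbackConstants.LONG_PRESS);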

Because speech recognition and Text-To-Speech applications are all the rage, and their technologies are often used in navigation applications (especially because many states are passing laws that make it illegal to use a mobile device while driving without hands-free operation), let’s look at these two technologies in a little more detail.

Android applications can leverage speech input and output. Speech input can be achieved using speech recognition services and speech output can be achieved using Text-To-Speech services. Not all devices support these services. However, certain types of applications—most notably hands-free applications such as directional navigation—often benefit from the use of these types of input.

Speech services are available within the Android SDK in the android.speech package. The underlying services that make these technologies work might vary from device to device; some services might require a network connection to function properly.

Leveraging Speech Recognition Services

You can enhance an application with speech recognition support by using the speech recognition framework provided within the Android SDK. Speech recognition involves speaking into the device microphone and enabling the software to detect and interpret that speech and translate it into a string. Speech recognition services are intended for use with short command-like phrases without pauses, not for long dictation. If you want more robust speech recognition, you need to implement your own solution.

On Android SDK 2.1 and higher, access to speech recognition is built in to most popup keyboards. Therefore, an application might already support speech recognition, to some extent, without any changes. However, directly accessing the recognizer can allow for more interesting spoken-word control over applications.

You can use the android.speech.RecognizerIntent intent to launch the built-in speech recorder. This launches the recorder, allowing the user to record speech.

Figure: Recording speech with the RecognizerIntent.

The sound file is sent to an underlying recognition server for processing, so this feature is not really practical for devices that don’t have a reasonable network connection. You can then retrieve the results of the speech recognition processing and use them within your application. Note that you might receive multiple results for a given speech segment.
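Because of this, a careful application might verify that a speech recognizer is actually present before offering the feature. Here is a minimal sketch (the R.id.ButtonRecord identifier is a hypothetical ID for the record button) that queries the PackageManager and disables the button when no recognition activity is found:

// Check whether any activity on the device can handle speech recognition.
PackageManager pm = getPackageManager();
List<ResolveInfo> activities = pm.queryIntentActivities(
    new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
Button recordButton = (Button) findViewById(R.id.ButtonRecord); // hypothetical ID
recordButton.setEnabled(!activities.isEmpty()); // no recognizer, no recording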

The following code demonstrates how an application could be enabled to record speech using the RecognizerIntent intent:

public class SimpleSpeechActivity extends Activity {

    private static final int VOICE_RECOGNITION_REQUEST = 1;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }

    public void recordSpeech(View view) {
        // Launch the built-in speech recorder and ask for free-form recognition.
        Intent intent =
            new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT,
            "Please speak slowly and clearly");
        startActivityForResult(intent, VOICE_RECOGNITION_REQUEST);
    }

    @Override
    protected void onActivityResult(int requestCode,
            int resultCode, Intent data) {
        if (requestCode == VOICE_RECOGNITION_REQUEST &&
                resultCode == RESULT_OK) {
            // Retrieve the recognition results and display the best match.
            ArrayList<String> matches = data.getStringArrayListExtra(
                RecognizerIntent.EXTRA_RESULTS);
            TextView textSaid = (TextView) findViewById(R.id.TextSaid);
            textSaid.setText(matches.get(0));
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}

In this case, the intent is initiated through the click of a Button control, which causes the recordSpeech() method to be called. The RecognizerIntent is configured as follows:

  • The intent action is set to ACTION_RECOGNIZE_SPEECH in order to prompt the user to speak and send that sound file in for speech recognition.
  • An intent extra called EXTRA_LANGUAGE_MODEL is set to LANGUAGE_MODEL_FREE_FORM to simply perform standard speech recognition. There is also another language model especially for web searches called LANGUAGE_MODEL_WEB_SEARCH.
  • An intent extra called EXTRA_PROMPT is set to a string to display to the user during speech input.
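In addition to these extras, the RecognizerIntent class defines several other optional extras. For example, a short sketch (the values shown are only illustrative) that limits the number of returned matches and requests a specific spoken language could add the following lines to recordSpeech():

// Optional extras; the values here are illustrative.
intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 3); // return at most three matches
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US"); // request a specific language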

After the RecognizerIntent object is configured, the intent can be started using the startActivityForResult() method, and then the result is captured in the onActivityResult() method. The resulting text is then displayed in the TextView control called TextSaid. In this case, only the first of the returned results is displayed to the user. So, for example, the user could press the button initiating the recordSpeech() method, speak a phrase, and see the recognized text appear in the TextView control, as shown in the following figure.

Figure: The text string resulting from the RecognizerIntent.

Leveraging Text-To-Speech Services

The Android platform includes a TTS engine (android.speech.tts) that enables devices to perform speech synthesis. You can use the TTS engine to have your applications “read” text to the user. You might have seen this feature used frequently with location-based services (LBS) applications that allow for hands-free directions. Other applications use this feature for users who have reading or sight problems. The synthesized speech can be played immediately or saved to an audio file, which can be treated like any other audio file.

For a simple example, let’s have the device read back the text recognized in our earlier speech recognition example. First, we must modify the activity to implement the TextToSpeech.OnInitListener interface, as follows:

public class SimpleSpeechActivity extends Activity
implements TextToSpeech.OnInitListener
{
// class implementation
}

Next, you need to initialize TTS services within your activity:

TextToSpeech mTts = new TextToSpeech(this, this);
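Because the engine is used later from onInit() and readText(), one reasonable approach (a sketch; the DEBUG_TAG value is an assumption that simply matches the logging calls in the next snippet) is to hold the engine in a member field and create it in onCreate():

private static final String DEBUG_TAG = "SimpleSpeech"; // tag used by onInit() below
private TextToSpeech mTts; // member field so other methods can reach the engine

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
    // The activity supplies both the Context and the OnInitListener.
    mTts = new TextToSpeech(this, this);
}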

Initializing the TTS engine happens asynchronously. The TextToSpeech.OnInitListener interface has only one method, onInit(), that is called when the TTS engine has finished initializing successfully or unsuccessfully. Here is an implementation of the onInit() method:

@Override
public void onInit(int status) {
    Button readButton = (Button) findViewById(R.id.ButtonRead);
    if (status == TextToSpeech.SUCCESS) {
        // Engine initialized; make sure the language we need is available.
        int result = mTts.setLanguage(Locale.US);
        if (result == TextToSpeech.LANG_MISSING_DATA
                || result == TextToSpeech.LANG_NOT_SUPPORTED) {
            Log.e(DEBUG_TAG, "TTS Language not available.");
            readButton.setEnabled(false);
        } else {
            readButton.setEnabled(true);
        }
    } else {
        Log.e(DEBUG_TAG, "Could not initialize TTS Engine.");
        readButton.setEnabled(false);
    }
}

We use the onInit() method to check the status of the TTS engine. If it was initialized successfully, the Button control called readButton is enabled; otherwise, it is disabled. The onInit() method is also the appropriate time to configure the TTS engine. For example, you should set the language used by the engine using the setLanguage() method. In this case, the language is set to American English, so the TTS engine uses American pronunciation.
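The onInit() method is also a reasonable place to adjust how the voice sounds. For example, a short sketch (the values are illustrative) that tweaks the speech rate and pitch in the success branch might look like this:

// Illustrative values; 1.0f is the default for both settings.
mTts.setSpeechRate(0.9f); // speak slightly more slowly than normal
mTts.setPitch(1.1f); // use a slightly higher-pitched voice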

Finally, you are ready to actually convert some text into speech. In this case, we grab the text string currently stored in the TextView control (where it was set using speech recognition in the previous section) and pass it to the TTS engine using the speak() method:

public void readText(View view) {
    TextView textSaid = (TextView) findViewById(R.id.TextSaid);
    // Speak the recognized text immediately, flushing anything already queued.
    mTts.speak(textSaid.getText().toString(),
        TextToSpeech.QUEUE_FLUSH, null);
}

The speak() method takes three parameters: the string of text to say, the queuing strategy, and the speech parameters. The queuing strategy can either add the text to the end of the current speech queue or flush the queue; in this case, we use the QUEUE_FLUSH strategy, so the new text is the only speech spoken. No special speech parameters are needed, so we simply pass in null for the third parameter.

Finally, when you are done with the TextToSpeech engine (such as in your activity’s onDestroy() method), make sure to release its resources using the shutdown() method:

mTts.shutdown();
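For example, a minimal sketch of an onDestroy() override for this activity might look like the following:

@Override
protected void onDestroy() {
    // Release the TTS engine's resources when the activity is destroyed.
    if (mTts != null) {
        mTts.shutdown();
    }
    super.onDestroy();
}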

Now, if you wire up a Button control to call the readText() method when clicked, you have a complete implementation of TTS. When combined with the speech recognition example discussed earlier, you can develop an application that can record a user’s speech, translate it into a string, display that string on the screen, and then read that string back to the user. In fact, that is exactly what the sample project called SimpleSpeech does.
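If you prefer to wire the Button up in code rather than through an android:onClick attribute in the layout, a short sketch (placed in onCreate() and reusing the R.id.ButtonRead identifier from the onInit() snippet) might look like this:

Button readButton = (Button) findViewById(R.id.ButtonRead);
readButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        readText(v); // same handler the layout could reference directly
    }
});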


