Machines that can convert spoken words into digital form for computer storage and processing are known as voice-input devices. If you stop to think about how complicated interpreting a spoken word can be even for humans, you begin to appreciate how difficult it is to design a voice-input device to do essentially the same thing.
Two people may pronounce the same word differently because of accents, personal styles of speech, and the unique quality of each person’s voice.
Even the same person can pronounce words differently at various times, depending on his or her health or level of anxiety. Moreover, in listening to others, we not only ignore irrelevant background noise but also decode complex grammatical constructions as well as sentence fragments.
Equipment designers have tried to overcome these obstacles in a number of ways. Voice-input devices are designed to be “trained” by users, who repeat words until the machine learns to recognize their voices.
These devices can also screen out background noise. Unfortunately, most voice-input devices can recognize only a limited number of isolated words and not whole sentences composed of continuous speech.
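To make the idea of “training” concrete, here is a minimal sketch, in Python, of the kind of template-matching approach such isolated-word recognizers have typically used: each time the user repeats a word, a sequence of feature vectors is stored as a template, and an unknown utterance is labeled with the closest stored word under dynamic time warping. The class and method names (IsolatedWordRecognizer, train_word, recognize) and the use of generic feature vectors are illustrative assumptions, not the design of any particular commercial device.

```python
import numpy as np

class IsolatedWordRecognizer:
    """Toy template-matching recognizer: one or more feature-vector
    sequences ("templates") are stored per word during training, and an
    unknown utterance is labeled with the word whose template is closest
    under dynamic time warping (DTW)."""

    def __init__(self):
        self.templates = {}  # word -> list of (frames x features) arrays

    def train_word(self, word, features):
        """Store one repetition of `word`; repeating the word several
        times simply adds more templates for it."""
        self.templates.setdefault(word, []).append(np.asarray(features, float))

    @staticmethod
    def _dtw_distance(a, b):
        """Minimum cumulative frame-to-frame distance over all monotonic
        alignments of sequences a and b (classic DTW recurrence)."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],
                                     cost[i, j - 1],
                                     cost[i - 1, j - 1])
        return cost[n, m]

    def recognize(self, features):
        """Return the trained word whose stored template is nearest to
        the unknown utterance `features`."""
        features = np.asarray(features, float)
        best_word, best_dist = None, np.inf
        for word, reps in self.templates.items():
            for template in reps:
                d = self._dtw_distance(features, template)
                if d < best_dist:
                    best_word, best_dist = word, d
        return best_word
```

Because every stored template must be compared against each new utterance, recognition time grows with the size of the vocabulary and the number of repetitions stored, which is one reason such devices keep their word lists small.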
Thus, the complexity of the messages to which they can respond is quite limited. Still, the possible applications of this technology are exciting. In fact, its potential is probably far greater than that of voice output, a more mature technology at this time.
Imagine yourself speaking into a microphone while a printer automatically types your words. Such a system is in fact commercially available today; however, the number of words the computer can “understand” is extremely limited, typically under 1,000.
Experts say that voice-actuated word processing will require recognition of at least 10,000 words.