Hearing


The auditory system changes as a consequence of the aging process, as well as a result of exposure to environmental agents and disease. The cumulative effect of these factors over the life span is a significant hearing loss among a large proportion of adults aged sixty-five years and older. Hearing loss associated exclusively with the aging process is known as presbycusis. Deterioration of the auditory system with age leads to changes not only in hearing sensitivity, but also to a decline in processing of speech stimuli, particularly in less-than-ideal listening environments. However, there is large individual variability in the auditory abilities of older people, as well as substantial gender differences in auditory performance. Thus, generalizations about hearing in aging and the impact of decline in the auditory sense must be considered with an understanding of the range of individual differences that may occur.

Prevalence of hearing loss

Hearing loss is the fourth most common chronic health condition reported by individuals who are sixty-five years and over (National Center for Health Statistics). Among males, 33 percent aged sixty-five to seventy-four years, and 43 percent aged seventy-five years and over report a hearing impairment. Comparable figures for females are 16 percent and 31 percent for sixty-five to seventy-four year olds and those seventy-five and older, respectively. A much higher prevalence rate of hearing loss among older people is reported in studies that actually test hearing sensitivity in the population. For example, 83 percent of the 2,351 participants in the Framingham Heart Study cohort, ages fifty-seven to eighty-nine years, had some degree of hearing loss at one frequency in the speech range (Moscicki, Elkins, Baum, and McNamara). Using a criterion of average hearing sensitivity exceeding 25 decibels Hearing Level (dB HL) as indicating significant hearing loss, the overall prevalence rate in large population-based studies of older adults is about 46 percent.

Source of hearing problems and effects on the auditory system

The principal causes of significant hearing loss among older people are noise exposure, disease, heredity, and senescence. Exposure to industrial noise exceeding 90 dBA for an eight-hour workday over a period of time is known to cause permanent, high-frequency hearing loss. Additionally, a single exposure to very intense sound (exceeding 130 dBA) can also cause a permanent hearing loss that affects either all frequencies or selective, high frequencies. Diseases specific to the ear that affect adults include otosclerosis, Meniere's disease, and labyrinthitis. More than one hundred different abnormal genes causing sensorineural hearing loss have been identified. Although hereditary hearing loss accounts for about 50 percent of congenital childhood deafness, it is also thought to play a role in progressive hearing loss during later adulthood (Fischel-Ghodsian). At least one report describes a strong family pattern of presbycusis, particularly in women (Gates et al.). Finally, age-related deterioration of structures in the auditory system appears to occur among individuals with no significant history of noise exposure, otologic disease, or familial hearing loss.

The auditory system is housed within the temporal bone of the skull and consists of the outer ear, middle ear, inner ear, nerve of hearing (N. VIII), and central auditory nervous system. Evidence from anatomical studies of temporal bones and physiologic studies of auditory system function in older individuals suggests that age-related changes can occur at each level of the auditory system.

The outer ear consists of the pinna and the ear canal, which collect and amplify acoustic energy as it is transferred toward the tympanic membrane (eardrum). Changes commonly observed in the outer ear of older individuals include an enlargement of the pinnae, an increase in cerumen (earwax) production in the ear canal, and a change in the cartilage support of the ear canals. These factors can affect the sound field-to-eardrum transfer function and thereby alter sound transmission that is received at the tympanic membrane. Excessive cerumen, found in approximately 40 percent of an elderly population, can add a slight-to-mild high frequency conductive overlay to existing hearing thresholds.

The middle ear contains the three tiny bones, or ossicles (malleus, incus, and stapes), that are linked together as the ossicular chain. The principal function of the middle ear is to transmit acoustic energy effectively from the ear canal to the inner ear without an energy loss. The two middle ear muscles, the tensor tympani and stapedius, contract in response to loud sound to protect the inner ear from damage. With aging, the ligaments, muscles, and ossicles comprising the ossicular chain may degenerate, presumably causing a conductive hearing loss. Electrophysiologic measures of middle ear function (tympanometry) further indicate that the middle ear stiffens with age, thereby reducing the transmission of acoustic energy through the middle ear (Wiley et al., 1996).

The inner ear is composed of a fluid-filled bony labyrinth of interconnected structures including the cochlea. The cochlea contains the sensory end organ for hearing (the organ of Corti), which supports the inner and outer hair cells. These microscopic sensory cells are essential for processing sound. The cochlea analyzes the frequency and intensity of sound, which is transmitted to the nerve of hearing by the inner hair cells. At the same time, the outer hair cells initiate a feedback mechanism resulting in the presence of acoustic energy in the ear canal (otoacoustic emissions). One prominent change in the inner ear with age is a loss of inner and outer hair cells in the basal turn of the cochlea (Schuknecht). Age-related loss of inner hair cells in this region produces a high frequency hearing loss and has been called sensory presbycusis. The loss of outer hair cells is expected to alter the feedback mechanism, possibly causing hearing loss and limited capacity to finely tune the frequency of sound. Electrophysiologic measures of outer hair cell function indicate that thresholds of otoacoustic emissions increase linearly with increasing age, although this age effect is confounded by the presence of hearing loss among older subjects (Stover and Norton). Another prominent change in the inner ear with aging is a decrease in the volume of vascular tissue, the stria vascularis, lining the outer cochlear wall. The stria vascularis maintains the chemical balance of the fluid in the cochlea, which in turn nourishes the hair cells. A loss of the vascular tissue produces a permanent hearing loss affecting most frequencies, called strial presbycusis (Schuknecht, 1993).

Approximately thirty-five thousand neurons comprise the afferent auditory branch of the eighth cranial nerve (N. VIII) in young, healthy adults. The auditory branch of N. VIII recodes the frequency, intensity, and timing information received from the hair cells and transmits it to the nuclei of the central auditory nervous system. With age, there is a loss of auditory neurons that accumulates over the life span. Considerable evidence demonstrates that the neuronal population comprising the auditory nerve is markedly reduced in aged human subjects compared to younger subjects. The effect on hearing, called neural presbycusis, is a mild loss of sensitivity but a considerable deficit in discriminating attributes of sound, including speech.

The nuclei of the central auditory nervous system transmit acoustic signals to higher levels, compare signals arriving at the two ears, recode the frequency of sound, and code other characteristics of the temporal waveform. Final processing of acoustic information is carried out in the primary auditory cortex, located in the superior temporal gyrus. There is a substantial reduction in the number of neurons in each nucleus of the central auditory nervous system with age, with the most prominent decline occurring in the auditory cortex (Willott). These alterations are thought to affect processing of complex acoustic stimuli, including distorted speech signals and sequences of tonal patterns.

Auditory performance

Hearing sensitivity decreases with increasing age among both men and women. A longitudinal study of hearing thresholds among individuals screened for noise exposure, otologic disease, and hereditary hearing loss showed that hearing thresholds decline progressively above age twenty years in men, and above age fifty years in women (Pearson et al.). At certain ages, the decline in hearing thresholds of the men was more than twice as fast as that of the women. Women showed the greatest decline in hearing sensitivity in the low frequencies, whereas men showed the greatest decline in the higher frequencies. For the unscreened population, the average thresholds of older men, sixty-five years of age, show normal hearing sensitivity in the low frequencies, declining to a moderate hearing loss (42 dB HL) at 3000 cycles per second (Hz) and above (Robinson). For women, the average hearing thresholds at age sixty-five years indicate a slight hearing loss (16–25 dB HL) from 500 through 4000 Hz, and a mild hearing loss (30 dB HL) at 6000 Hz. The type of hearing loss typically is sensorineural, indicating that the site of lesion is the sensory mechanism of the inner ear or the nerve of hearing.

Hearing sensitivity in the ultra-high audiometric frequencies, above 8000 Hz, shows an age-related decline beginning in middle age that is greater than the decline in the lower audiometric frequencies (250–8000 Hz) (Wiley et al., 1998). These extended high-frequency thresholds are highly correlated with thresholds at 4000 Hz and 8000 Hz, suggesting that early monitoring of extended high-frequency thresholds among young and middle-aged adults may be useful for predicting the onset of presbycusis and for recommending preventive measures.

The ability to detect changes in temporal (timing) characteristics of acoustic stimuli appears to decline with age. Gap detection is the ability to detect a brief silent interval in a continuous tonal stimulus or noise, and reflects the temporal resolving power of the ear. Elderly listeners generally show longer gap detection thresholds than younger listeners (Schneider and Hamstra). Older listeners also require longer increments in tone duration to detect a change in a standard tone duration, compared to younger listeners (Fitzgibbons and Gordon-Salant, 1994). Finally, older listeners' performance for discriminating and identifying tones in a sequence is poorer than that of younger listeners, for tones of equivalent duration (Fitzgibbons and Gordon-Salant, 1998). Taken together, these findings indicate that older listeners have limited capacity to process brief changes in acoustic stimuli. This limitation could affect discrimination of the rapid acoustic elements that comprise speech.

Older people demonstrate difficulty understanding speech. In quiet listening environments, the speech recognition problem is attributed to insufficient audibility of the high-frequency information in speech by older people with age-related, high-frequency hearing loss (Humes). Substantial difficulty recognizing speech in noise also characterizes the performance of older listeners. Some studies have shown that the difficulties in noise are largely associated with the loss of sensitivity (Souza and Turner); other studies suggest that there is an added distortion factor with aging that acts to further diminish performance (Dubno, Dirks, and Morgan). The findings in noise are highly variable across studies and are largely dependent upon the speech presentation level, type of speech material (i.e., nonsense syllables, words, sentences), and availability of contextual cues.

In everyday communication situations, speech can be degraded by reverberant rooms and by people who speak at a rapid rate. Reverberation refers to a prolongation of sound in a room, and causes elements of speech to mask later-occurring speech sounds and silent pauses. With rapid speech, there is a reduction in the duration of pauses between words, vowel duration, and consonant duration. Time compression is an electronic or computer method to simulate rapid speech. Age effects are evident for recognition of both reverberant and time-compressed speech, which are independent and additive to the effects of hearing loss (Gordon-Salant and Fitzgibbons). Moreover, combined distortions (speech that is both reverberant and time-compressed, or either distortion presented in noise) are excessively difficult for older people. Because both of these types of distortions involve a manipulation of the temporal (time) speech waveform, the recognition problem of older people may reflect a deficit in processing the timing characteristics of sound. An alternative hypothesis is that an age-related cognitive decrement in rapid information processing limits the older person's ability to process speech presented at a fast rate (Wingfield et al.). It should be noted, however, that older people are able to perform quite well on many speech recognition tasks if given adequate contextual cues (Dubno, Ahlstrom, and Horwitz, 2000).

Impact of age-related hearing loss

Hearing impairment affects the quality of life for older individuals. The principal effects are related to the social and emotional impact of communication difficulties resulting from significant hearing loss (Mulrow et al.). Anecdotal reports of an association between dementia and hearing loss, or between depression and hearing loss, have not been replicated in well-controlled studies with large samples.

Older men and women adjust differently to their hearing loss. Women admit communication problems more often than men and assign greater importance to effective communication (Garstecki and Erler). This finding could be associated with differences in marital status between older men and women; older women are more likely to be widowed and thus rely to a greater extent on social interactions outside of the family. Men appear to adjust better to their hearing loss, reporting less anger and stress than women. On the other hand, older men are more likely than women to deny negative emotional reactions related to their hearing loss.

Remediation

Age-related sensorineural hearing loss cannot be ameliorated with medication or surgery. Rather, the principal form of treatment is amplification using a hearing aid. Analog and digital hearing aids are designed to amplify either all or selective frequencies based on an individual's hearing loss, with the goal of bringing all speech sounds into the range of audibility for the hearing-impaired listener. People with sensorineural hearing loss also experience a reduced tolerance for loud sounds. As a result, most hearing aids incorporate amplitude compression circuits to limit the output level of amplified sound without producing distortion. Hearing aids are quite effective for amplifying sound without producing discomfort. Thus, it is not surprising that older hearing-impaired people demonstrate significant benefit from hearing aids for understanding speech in quiet and noisy listening environments and for reducing their perceived hearing handicap (Humes, Halling, and Coughlin). However, there is wide individual variability in the magnitude of hearing aid benefit. The same amplification that hearing aids provide for a target speech signal is applied as well to noise, including the voices of people talking in a background. As a result, older hearing aid users often report less benefit from their hearing aids in noisy environments than in quiet environments. Only about 20 percent of older individuals with hearing loss purchase hearing aids. The prevailing reasons for lack of hearing aid use among older people are stigma, cost, and limited perceived benefit, particularly in noise. Another possible reason for hearing aid rejection by older people is that personal hearing aids do not overcome the older person's difficulties in understanding reverberant speech or rapid speech.
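The amplitude-compression idea can be sketched as a static input–output rule: below a knee point the signal receives full gain, and above it each additional decibel of input produces less than a decibel of output. The sketch below is a minimal illustration of that curve only, not an actual hearing-aid algorithm, and the gain, knee, and compression-ratio values are arbitrary:

```python
def compress_level(input_db, gain_db=30.0, knee_db=60.0, ratio=3.0):
    """Static compression curve: full gain below the knee point,
    reduced gain above it. All quantities are in dB."""
    output_db = input_db + gain_db
    knee_out = knee_db + gain_db
    if output_db > knee_out:
        # Above the knee, each extra dB in yields only 1/ratio dB out.
        output_db = knee_out + (output_db - knee_out) / ratio
    return output_db

for level in (40, 60, 80, 100):
    print(level, "->", round(compress_level(level), 1))
# 40 -> 70.0, 60 -> 90.0, 80 -> 96.7, 100 -> 103.3
```

Real devices apply such gain reduction to the short-term signal level, usually separately in several frequency bands, but the output-limiting behavior shown here is the essential idea.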

Frequency-modulated (FM) systems are amplification devices that can be beneficial for older listeners when they are located at a distance from a speaker. These can be independent systems or they can be components attached to a hearing aid and used as a selectable circuit. An FM system includes a microphone/transmitter placed near the speaker that broadcasts the sound, via FM transmission, to a receiver/amplifier located on the user. The amplified sound is unaffected by room acoustics, including noise and reverberation. This type of device is particularly helpful for older listeners in theaters, houses of worship, or classrooms, where a long distance between the speaker and the listener can aggravate the detrimental effects of poor room acoustics for older listeners with hearing loss.

Older individuals with bilateral, severe-to-profound hearing loss generally have widespread damage to the cochlea and derive minimal benefit from hearing aids and FM systems for recognizing speech. These individuals are potential candidates for a cochlear implant, a surgically implanted device that delivers speech signals directly to the auditory nerve via an electrode array inserted in the cochlea. Considerable success in receiving and understanding speech, with and without visual cues, has been reported for older cochlear implant recipients (Waltzman, Cohen, and Shapiro).

Regardless of the type of device used by the older hearing-impaired person, a successful remediation program includes auditory training, speechreading (lipreading) training, and counseling. The emphasis in these programs is training the older person to take full advantage of all available contextual cues, based on the consistent finding that older people are able to surmount most communication problems if contextual cues are available. Another principle of these programs is training effective use of nonverbal strategies (e.g., stage managing tactics for optimal viewing and listening) and verbal strategies (e.g., requesting the speaker to talk more slowly).

Prevention

Hearing sensitivity of older individuals in nonindustrialized societies is significantly better than that of older individuals in industrialized societies. This finding strongly suggests that there are preventable risk factors in industrialized societies for apparent age-related hearing loss. Exposure to intense noise and administration of ototoxic drugs are two well-known risk factors for acquired sensorineural hearing loss. The Baltimore Longitudinal Study of Aging has shown that elevated systolic blood pressure is associated with significant risk for hearing loss in men (Brant et al.). In the Beaver Dam epidemiological study, smoking was identified as a significant risk factor for sensorineural hearing loss among the 3,753 participants (Cruickshanks et al., 1998). Nonsmoking participants who lived with a smoker were also more likely to have a hearing loss than those not exposed to smoke in the home. The identification of these modifiable risk factors suggests that an effective program of prevention or delay of adult-onset hearing loss would include use of ear protection in noisy environments, control of hypertension, elimination of cigarette smoking, and monitoring the use of potentially ototoxic medications.

Sandra Gordon-Salant

See also Brain; Home Adaptation and Equipment; Vision and Perception.

BIBLIOGRAPHY

Brant, L. J.; Gordon-Salant, S.; Pearson, J. D.; Klein, L. L.; Morrell, C. H.; Metter, E. J.; and Fozard, J. L. "Risk Factors Related to Age-Associated Hearing Loss in the Speech Frequencies." Journal of the American Academy of Audiology 7, no. 3 (1996): 152–160.

Cruickshanks, K. J.; Klein, R.; Klein, B. E.; Wiley, T. L.; Nondahl, D. M.; and Tweed, T. S. "Cigarette Smoking and Hearing Loss: The Epidemiology of Hearing Loss Study." Journal of the American Medical Association 279, no. 21 (1998): 1715–1719.

Dubno, J. R.; Ahlstrom, J. B.; and Horwitz, A. R. "Use of Context by Young and Aged Adults with Normal Hearing." Journal of the Acoustical Society of America 107, no. 1 (2000): 538–546.

Dubno, J. R.; Dirks, D. D.; and Morgan, D. E. "Effects of Age and Mild Hearing Loss on Speech Recognition." Journal of the Acoustical Society of America 76, no. 1 (1984): 87–96.

Fischel-Ghodsian, N. "Mitochondrial Deafness Mutations Reviewed." Human Mutation 13, no. 4 (1999): 261–270.

Fitzgibbons, P. J., and Gordon-Salant, S. "Age Effects on Measures of Auditory Duration Discrimination." Journal of Speech and Hearing Research 37, no. 3 (1994): 662–670.

Fitzgibbons, P. J., and Gordon-Salant, S. "Auditory Temporal Order Perception in Younger and Older Adults." Journal of Speech, Language, and Hearing Research 41, no. 5 (1998): 1052–1060.

Garstecki, D., and Erler, S. F. "Older Adult Performance on the Communication Profile for the Hearing Impaired: Gender Difference." Journal of Speech, Language, and Hearing Research 42, no. 3 (1999): 735–796.

Gates, G. A.; Couropmitree, N. N.; and Myers, R. H. "Genetic Associations in Age-Related Hearing Thresholds." Archives of Otolaryngology–Head and Neck Surgery 125, no. 6 (1999): 654–659.

Gordon-Salant, S., and Fitzgibbons, P. J. "Temporal Factors and Speech Recognition Performance in Young and Elderly Listeners." Journal of Speech and Hearing Research 36, no. 6 (1993): 1276–1285.

Humes, L. E. "Speech Understanding in the Elderly." Journal of the American Academy of Audiology 7, no. 3 (1996): 161–167.

Humes, L. E.; Halling, D.; and Coughlin, M. "Reliability and Stability of Various Hearing-Aid Outcome Measures in a Group of Elderly Hearing-Aid Wearers." Journal of Speech, Language, and Hearing Research 39, no. 5 (1996): 923–935.

Moscicki, E. K.; Elkins, E. F.; Baum, H. M.; and McNamara, P. M. "Hearing Loss in the Elderly: An Epidemiologic Study of the Framingham Heart Study Cohort." Ear and Hearing 6, no. 4 (1985): 184–190.

Mulrow, C. D.; Aguilar, C.; Endicott, J. E.; Velez, R.; Tuley, M. R.; Charlip, W. S.; and Hill, J. A. "Association Between Hearing Impairment and the Quality of Life of Elderly Individuals." Journal of the American Geriatrics Society 38, no. 1 (1990): 45–50.

National Center for Health Statistics. "Current Estimates from the National Health Interview Survey, 1995." Vital and Health Statistics 10 (1998): 79–80.

Pearson, J. D.; Morrell, C. H.; Gordon-Salant, S.; Brant, L. J.; Metter, E. J.; Klein, L. L.; and Fozard, J. L. "Gender Differences in a Longitudinal Study of Age-Associated Hearing Loss." Journal of the Acoustical Society of America 97, no. 2 (1995): 1196–1205.

Robinson, D. W. "Threshold of Hearing as a Function of Age and Sex for the Typical Unscreened Population." British Journal of Audiology 22, no. 1 (1988): 5–20.

Schneider, B. A., and Hamstra, S. J. "Gap Detection Thresholds as a Function of Tonal Duration for Younger and Older Listeners." Journal of the Acoustical Society of America 106, no. 1 (1999): 371–380.

Schuknecht, H. F. Pathology of the Ear, 2d ed. Philadelphia: Lea & Febiger, 1993.

Souza, P. E., and Turner, C. W. "Masking of Speech in Young and Elderly Listeners with Hearing Loss." Journal of Speech and Hearing Research 37, no. 3 (1994): 655–661.

Stover, L., and Norton, S. J. "The Effects of Aging on Otoacoustic Emissions." Journal of the Acoustical Society of America 94, no. 5 (1993): 2670–2681.

Waltzman, S.; Cohen, N.; and Shapiro, B. "The Benefits of Cochlear Implantation in the Geriatric Population." Otolaryngology–Head and Neck Surgery 108, no. 4 (1993): 329–333.

Wiley, T. L.; Cruickshanks, K. J.; Nondahl, D. M.; Tweed, T. S.; Klein, R.; and Klein, B. E. K. "Tympanometric Measures in Older Adults." Journal of the American Academy of Audiology 7, no. 4 (1996): 260–268.

Wiley, T. L.; Cruickshanks, K. J.; Nondahl, D. M.; Tweed, T. S.; Klein, R.; and Klein, B. E. K. "Aging and High-Frequency Hearing Sensitivity." Journal of Speech, Language, and Hearing Research 41, no. 5 (1998): 1061–1072.

Willott, J. F. Aging and the Auditory System. San Diego: Singular Publishing Group, 1991.

Wingfield, A.; Poon, L. W.; Lombardi, L.; and Lowe, D. "Speed of Processing in Normal Aging: Effects of Speech Rate, Linguistic Structure, and Processing Time." Journal of Gerontology 40, no. 5 (1985): 579–585.

Hearing


Hearing is the ability to collect, process, and interpret sound. Sound vibrations travel through air, water, or solids in the form of pressure waves. When a sound wave hits a flexible object such as the eardrum, it causes that object to vibrate, which begins the process of hearing. Hearing involves the conversion of acoustical energy (sound waves) to mechanical, hydraulic, chemical, and finally electrical energy, at which point the signal reaches the brain and is interpreted.

Sound

The basis of sound is simple: There is a vibrating source, a medium in which sound travels, and a receiver. For humans the most important sounds are those that carry meaning, for example speech and environmental sounds. Sounds can be described in two ways, by their frequency (or pitch), and by their intensity (or loudness).

Frequency (the number of vibrations or sound waves per second) is measured in Hertz (Hz). A sound that is 4,000 Hz (like the sound the letter F makes) has 4,000 waves per second. Healthy young adults can hear frequencies between 20 and 20,000 Hz. However, the frequencies most important for understanding speech are between 200 and 8,000 Hz. As adults age, the ability to hear high frequency sounds decreases. An example of a high-frequency sound is a bird chirping, while a drum beating is a low-frequency sound.

Intensity (loudness) is the amount of energy of a vibration, and is measured in decibels (dB). A zero-decibel sound (like leaves rustling in the wind) can barely be heard by young healthy adults. In contrast, a 120 dB sound (like a jet engine at 20 feet [6 m]) is perceived as very loud and/or painful. Extremes of loudness or pitch may seriously damage the human ear and should be avoided.
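The decibel scale is logarithmic in sound pressure. A standard formulation (using the conventional reference pressure of 20 micropascals, roughly the faintest audible pressure) is

$$L_p = 20 \log_{10}\!\left(\frac{p}{p_0}\right)\ \text{dB}, \qquad p_0 = 20\ \mu\text{Pa},$$

so a sound at the reference pressure is 0 dB, and a pressure one million times greater gives $20\log_{10}(10^6) = 120$ dB, the jet-engine example above.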

The difference between frequency (pitch) and intensity (loudness) can be illustrated using the piano as an analogy. The piano keyboard contains 88 keys that represent different frequencies (or notes). The low frequencies (bass notes) are on the left, the higher frequencies (treble notes) are on the right. Middle C on the keyboard represents approximately 256 Hz. The intensity or loudness of a note depends on how hard you hit the key. A light touch on middle C may produce a 30 dB, 256 Hz note, while a hard strike on middle C may produce a 55 dB, 256 Hz note. The frequency (or note) stays the same, but the intensity or loudness varies as the pressure on the key varies.
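As a concrete illustration of the piano analogy, the following sketch (assuming NumPy is available; the sample rate and level values are illustrative) synthesizes two middle-C tones that differ only in intensity, not in frequency:

```python
import numpy as np

def pure_tone(frequency_hz, level_db, duration_s=1.0, sample_rate=44100):
    """Synthesize a sine wave; level_db sets the relative level
    (0 dB = full-scale amplitude of 1.0)."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    amplitude = 10 ** (level_db / 20)        # dB -> linear amplitude
    return amplitude * np.sin(2 * np.pi * frequency_hz * t)

# The same note (middle C, ~256 Hz) at two different intensities:
soft_c = pure_tone(256, level_db=-25)        # "light touch"
loud_c = pure_tone(256, level_db=0)          # "hard strike"

ratio = np.max(np.abs(loud_c)) / np.max(np.abs(soft_c))
print(f"Level difference: {20 * np.log10(ratio):.1f} dB")   # ~25.0 dB
```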

Animal hearing

The difference between hearing in humans and animals is often visible externally. For example, some animals (e.g., birds) lack external ears/pinnas, but maintain similar internal structures to the human ear. Although birds have no pinnas, they have middle ears and inner ears similar to humans, and like humans, hear best at frequencies around 2,000 to 4,000 Hz. All mammals (the animals most closely related to the human) have outer ears/pinnas. Many mammals have the ability to move the pinna to help with localization of sounds. Foxes, for example, have large bowl-shaped pinnas that can be moved to help locate faint or distant sounds. In addition to sound localization, some animals are able to manipulate their pinnas to regulate body temperature. Elephants do this by using their huge pinnas as fans and for heat exchange.

Human hearing

Human hearing involves a complicated process of energy conversion. This process begins with two ears at opposite sides of the human head. The ability to use two ears for hearing is called binaural hearing. The primary advantages to binaural hearing are the increased ability to localize sounds and the increased ease of listening in background noise. Sound waves from the world around us enter the ear and are processed and relayed to the brain. The actual process of sound transmission differs in each of the three parts of the human ear (the outer, middle and inner ears).

Outer ear and hearing

The pinna of the outer ear gathers sound waves from the environment and transmits them through the external auditory canal and eardrum to the middle ear. In the process of collecting sounds, the outer ear also modifies the sound. The external ear, or pinna, in combination with the head, can slightly amplify (increase) or attenuate (decrease) certain frequencies. This amplification or attenuation is due to individual differences in the dimensions and contours of the head and pinna.

A second source of sound modification is the external auditory canal. The tube-like canal is able to amplify specific frequencies in the 3,000 Hz region. An analogy would be an opened, half-filled soda bottle. When you blow into the bottle there is a sound, the frequency of which depends on the size of the bottle and the amount of space in the bottle. If you empty some of the fluid and blow into the bottle again, the frequency of the sound will change. Since the size of the human ear canal is consistent, the specific frequency it amplifies is also constant. Sound waves travel through the ear canal until they strike the tympanic membrane (the eardrum). Together, the head, pinna and external auditory canal amplify sounds in the 2,000 to 4,000 Hz range by 10-15 dB. This boost is needed since the process of transmitting sound from the outer ear to the middle ear requires added energy.
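The bottle analogy can be made quantitative. A tube open at one end and closed at the other (here, closed by the eardrum) resonates at roughly a quarter wavelength; taking a canal length of about 2.5 cm and a speed of sound of 343 m/s (both approximate figures),

$$f \approx \frac{c}{4L} = \frac{343\ \text{m/s}}{4 \times 0.025\ \text{m}} \approx 3{,}400\ \text{Hz},$$

which is consistent with the 3,000 Hz region cited above.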

Middle ear and hearing

The tympanic membrane or eardrum separates the outer ear from the middle ear. It vibrates in response to pressure from sound waves traveling through the external auditory canal. The initial vibration causes the membrane to be displaced (pushed) inward by an amount that increases with the intensity of the sound, so that loud sounds push the eardrum more than soft sounds. Once the eardrum is pushed inwards, the pressure within the middle ear causes the eardrum to be pulled outward, setting up a back-and-forth motion that begins the conversion and transmission of acoustical energy (sound waves) to mechanical energy (bone movement).

The small connected bones of the middle ear (the ossicles: the malleus, incus, and stapes) move as a unit, in a type of lever-like action. The first bone, the malleus, is attached to the tympanic membrane, and the back-and-forth motion of the tympanic membrane sets all three bones in motion. The final result of this bone movement is pressure of the footplate of the last (smallest) bone (the stapes), on the oval window. The oval window is one of two small membranes that allow communication between the middle ear and the inner ear. The lever-like action of the bones amplifies the mechanical energy from the eardrum to the oval window. The energy in the middle ear is also amplified due to the difference in surface size between the tympanic membrane and the oval window, which has been calculated at 14 to 1. A thumbtack provides an analogy: the large head collects and applies pressure and focuses it on the pin point, driving it into the surface. The eardrum is like the head of the thumbtack and the oval window is the pin point. The overall amplification in the middle ear is approximately 25 dB. The conversion from mechanical energy (bone movement) to hydraulic energy (fluid movement) requires added energy since sound does not travel easily through fluids. We know this from trying to hear under water.
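The thumbtack analogy corresponds to a calculable pressure gain. Pressure is force per unit area, so concentrating the force collected by the eardrum onto an oval window one-fourteenth its size multiplies the pressure by about 14, a gain of

$$20 \log_{10}(14) \approx 23\ \text{dB},$$

with the lever action of the ossicles contributing the remaining few decibels of the approximately 25 dB total.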

Inner ear and hearing

The inner ear is the site where hydraulic energy (fluid movement) is converted to chemical energy (hair cell activity) and finally to electrical energy (nerve transmission). Once the signal is transmitted to the nerve, it will travel up to the brain to be interpreted.

The bone movements in the middle ear cause movement of the stapes footplate in the membrane of the oval window. This pressure causes fluid waves (hydraulic energy) throughout the entire two and a half turns of the cochlea. The design of the cochlea allows for very little fluid movement, therefore the pressure at the oval window is released by the interaction between the oval and round windows. When the oval window is pushed forward by the stapes footplate, the round window bulges outward and vice versa. This action permits the fluid wave motion in the cochlea. The cochlea is the fluid filled, snail shell-shaped coiled organ in the inner ear that contains the actual sense receptors for hearing.

The fluid motion causes a corresponding, but not equal, wave-like motion of the basilar membrane. Internally, the cochlea consists of three fluid filled chambers: the scala vestibuli, the scala tympani, and the scala media. The basilar membrane is located in the scala media portion of the cochlea, and separates the scala media from the scala tympani. The basilar membrane holds the key structure for hearing, the organ of Corti. The physical characteristics of the basilar membrane are important, as is its wave-like movement, from base (originating point) to apex (tip). The basilar wave motion builds slowly to a peak and then dies out quickly. The distance the wave takes to reach the peak depends on the speed at which the oval window is moved. For example, high-frequency sounds have short wavelengths, causing rapid movements of the oval window, and peak movements on the basilar membrane near the base of the cochlea.

KEY TERMS

Amplify: To increase.

Attenuate: To decrease.

Bilateral: Both sides of an object divided by a line or plane of symmetry.

Binaural: Using two ears.

Decibel: A unit of measurement of the intensity of sound, abbreviated dB.

Equilibrium: Balance; the ability to maintain body position.

Frequency: Pitch; the number of vibrations (or sound waves) per second.

Hertz: A unit of measurement for frequency, abbreviated Hz. One hertz is one cycle per second.

Hydraulic: Fluid in motion.

Intensity: Loudness; the amount of energy of a vibration or sound wave.

Localization: Ability to identify where a sound is coming from.

In contrast, low-frequency sounds have long wavelengths, cause slower movements of the oval window, and peak movements of the basilar membrane near the apex. The place of the peak membrane movements corresponds to the frequency of the sound. Sounds can be mapped or located on the basilar membrane; high-frequency sounds are near the base, middle-frequency sounds are in the middle, and low-frequency sounds are near the apex. In addition to the location on the basilar membrane, the frequency of sounds can be identified based on the number of nerve impulses sent to the brain.
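This frequency-to-place map has been fitted empirically; one commonly used fit is the Greenwood function. The sketch below uses the constants usually quoted for the human cochlea (treat the values as approximations):

```python
def greenwood_frequency(x):
    """Approximate characteristic frequency (Hz) at relative position x
    along the human basilar membrane (x = 0 at the apex, x = 1 at the base).
    Constants are the commonly quoted human fits."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}: ~{greenwood_frequency(x):,.0f} Hz")
# Runs from roughly 20 Hz at the apex to about 20,000 Hz at the base.
```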

The organ of Corti lies upon the basilar membrane and contains three to five outer rows (12,000 to 15,000 hair cells) and one inner row (3,000) of hair cells. The influence of the inner and outer hair cells has been widely researched. The common view is that the numerous outer hair cells respond to low intensity sounds (quiet sounds, below 60 dB). The inner hair cells act as a booster, by responding to high intensity, louder sounds. When the basilar membrane moves, it causes the small hairs on the top of the hair cells (called stereocilia) to bend against the overhanging tectorial membrane. The bending of the hair cells causes chemical actions within the cell itself creating electrical impulses (action potentials) in the nerve fibers attached to the bottom of the hair cells. The nerve impulses travel up the nerve to the temporal lobe of the brain. The intensity of a sound can be identified based on the number of hair cells affected and the number of impulses sent to the brain. Loud sounds cause a large number of hair cells to be moved, and many nerve impulses to be transmitted to the brain.

The separate nerve fibers join and travel to the lowest portion of the brain, the brainstem. Nerves from the vestibular part (balance part) of the inner ear combine with the cochlear nerves to form the VIII cranial nerve (auditory or vestibulocochlear nerve). Once the nerve impulses enter the brainstem, they follow an established pathway, known as the auditory pathway. The organization within the auditory pathway allows for a large amount of crossover. This means that the sound information (nerve impulses) from one ear does not travel exclusively to one side of the brain. Some nerve impulses cross over to the opposite side of the brain. The impulses travel on both sides (bilaterally) up the auditory pathway until they reach a specific point in the temporal lobe called Heschl's gyrus. Crossovers act like a safety net. If one side of the auditory pathway is blocked or damaged, the impulses can still reach Heschl's gyrus to be interpreted as sound.

See also Neuron.

Resources

BOOKS

Mango, Karin. Hearing Loss. New York: Franklin Watts, 1991.

Martin, Frederick. Introduction to Audiology. 6th ed. Boston: Allyn and Bacon, 1997.

Moller, Aage R. Sensory Systems: Anatomy and Physiology. New York: Academic Press, 2002.

Rahn, Joan. Ears, Hearing and Balance. New York: Atheneum, 1984.

Simko, Carole. Wired for Sound. Washington, DC: Kendall Green Publications, 1986.

Sundstrom, Susan. Understanding Hearing Loss and What Can Be Done. Illinois: Interstate Publishers, 1983.

PERIODICALS

Mestel, Rosie. "Pinna to the Fore." Discover 14 (June 1993): 45–54.

OTHER

The Medical Consumers Advocate. "A Primer on Ear Anatomy" <http://www.doctorhoffman.com/earanat.htm> (accessed November 26, 2006).

University of Denver. "Hearing and Sound" <http://www.du.edu/~jcalvert/waves/ears.htm> (accessed November 26, 2006).

Kathryn Glynn

Hearing


Hearing is an especially important avenue by which we gain information about the world around us; one reason is that it plays a primary role in speech communication, a uniquely human activity. Clearly, then, our ability to perceive our environment and therefore to interact with it, both in a physical and verbal or abstract sense, is dependent in large measure upon our sense of hearing [see Language, article on Speech Pathology; Perception, article on Speech Perception.]

The study of the auditory system is carried on by a variety of disciplines, including psychology, physics, engineering, mathematics, anatomy, physiology, and chemistry. This article deals primarily with work in psychology, although it will be necessary to refer to other areas for a more complete understanding of certain phenomena. The peripheral hearing mechanism is reviewed from the standpoint of anatomy, hydromechanical action, and electrical activity. The basic subjective correlates of auditory stimuli are discussed, together with current research and theory. In all cases, we are concerned with the normal rather than pathological auditory system.

The peripheral hearing mechanism. When one speaks of the ear, the image that first comes to mind is the flap of cartilaginous tissue, or the pinna, fixed to either side of the head. The presumed function of the pinna is to direct sound energy into the ear canal, or external auditory meatus. In some animals, such as the cat, the pinna may be directionally oriented independently of the head. For all practical purposes, however, man does not possess this ability. It has been shown that because of the particular shape of man’s pinnae, sound arriving at the head is modified differentially depending on its direction of arrival. This may well provide a cue for the localization of a sound source in space.

The external meatus and the eardrum. The external meatus is a tortuous canal about one inch in length, closed at the inner end by the eardrum, or tympanic membrane. The meatus forms a passageway through which sound energy may be transmitted to the inner reaches of the ear. The meatus has the general shape of a tube closed at one end; it tends to resonate at a frequency of about 3,000 cycles per second. Because of this resonance the pressure of sound waves at the eardrum, for frequencies in this vicinity, is twenty times greater than that at the pinna. The meatus, therefore, serves as a selective amplification device, and, interestingly enough, it is primarily in this frequency range that our hearing is most sensitive.

The middle ear. The eardrum marks the boundary between the outer and middle ear. At this point variations in sound pressure are changed into mechanical motion, and it is a function of the middle ear to transmit this mechanical motion to the inner ear, where it may excite the auditory nerve. This transmission is effected by three small bones, the auditory ossicles, which form a bridge across the middle ear. The ossicles are named for their shapes: the malleus (hammer), which is attached to the eardrum; the incus (anvil), which is fixed to the malleus; and the stapes (stirrup), which articulates with the incus and at the same time fits into an oval opening of the inner ear. The ossicles not only provide simple transmission of vibratory energy but in doing so actually furnish a desirable increase in pressure. The ossicles are held in place by ligaments and may be acted upon by two small muscles, the tensor tympani and the stapedius. The function of these muscles is not clear, but it has been suggested that their contraction, together with changes in mode of ossicular motion, serves at high levels of stimulation to reduce the effective input to the inner ear.

The inner ear. The “foot plate” of the stapes marks the end of the middle ear and the beginning of the inner ear. The inner ear actually consists of two portions that, although anatomically related, serve essentially independent functions. Here we are concerned only with the cochlea, which contains the sensory end organs of hearing. The cochlea, spiral in shape, is encased in bone and contains three nearly parallel, fluid-filled ducts running longitudinally. The middle of the three ducts has as its bottom a rather stiff membrane known as the basilar membrane. On this membrane is the organ of Corti, within which are the hair cells, or the sensory receptors for hearing. The hairs of the hair cells extend a short distance up into a gelatinous plate known as the tectorial membrane. When the basilar membrane is displaced transversely, the hairs are moved to the side in a shearing motion and the hair cells are stimulated.

Displacement of the basilar membrane is brought about by fluid movement induced by the pistonlike action of the stapes in the oval window. Since fluid is essentially noncompressible, its displacement is possible because of the existence of a second opening between the cochlea and the middle ear, the round window. When the stapes moves inward, a bulge is produced in the basilar membrane and the round window membrane moves outward. The bulge or local displacement of the basilar membrane is not stationary but moves down the membrane away from the windows. If the movement of the stapes is periodic, such as in response to a pure tone, then the basilar membrane is displaced alternately up and down. Thus, when a pure tone is presented to the ear, a wave travels continuously down the basilar membrane. The amplitude of this wave is not uniform but achieves a maximum value at a particular point along the membrane determined by the stimulus frequency. High frequencies yield maxima near the stapes; lower frequencies produce maxima progressively farther down the membrane.

Electrical potentials. Many of the mechanical events just described have an electrical counterpart. The cochlear microphonic, an electrical potential derived from the cochlea, reflects the displacement of the basilar membrane. The endocochlear potential represents static direct current voltages within various portions of the cochlea, whereas the summating potential is a slowly changing direct current voltage that occurs in response to a high-intensity stimulus. Also observable is the action potential, which is generated by the auditory neurons in the vicinity of the cochlea. Neural potentials reflecting the activity of the hair cells are transmitted by the eighth cranial nerve to the central nervous system.

Psychoacoustics. Although it is true that one of the principal functions of man’s auditory system is the perception of speech, it does not necessarily follow that the exclusive use of speech stimuli is the best way to gain knowledge of our sense of hearing. In the study of hearing, the use of simple stimuli predominates and the common stimulus is the sine wave, or pure tone.

Traditionally, psychophysics has investigated problems in (1) detection of stimuli, (2) detection of differences between stimuli, and (3) relations among stimuli. Psychoacoustics has followed a similar pattern.

Threshold effects. It has been shown that a number of factors are influential in determining the minimum magnitude (often called intensity or amplitude) of an auditory signal that can be detected. Specifically, absolute thresholds are a function primarily of the frequency and duration of the signal. Under optimum conditions, sound magnitudes so faint as to approach the random movement of air molecules, known as Brownian movement, may be heard; the displacement of the basilar membrane in these cases is a thousand times less than the diameter of a hydrogen atom. Masked thresholds, those obtained in the simultaneous presence of a signal and other stimuli that mask its effect, depend on the frequency and relative magnitude of each stimulus.

In addition, previous auditory stimulation will affect subsequent absolute thresholds. Generally the effect is to lower auditory sensitivity, although in some instances sensitivity may be enhanced. Pertinent factors here include the magnitude, frequency, and duration of the “fatiguing” stimulus as well as the interval between the presentation of the fatiguing and test stimuli.

Differential thresholds. There are as many studies dealing with the detection of differences between two stimuli as there are ways in which stimuli may be varied. Only a few examples, therefore, will be cited here. With pure tones, thresholds for detecting frequency differences become greater as frequency is increased, and smaller as magnitude is increased. Differences as small as one part in a thousand are detectable. Differential thresholds for magnitude depend upon the same parameters, but in a more complex way.

Noise stimuli, those sound waves lacking a periodic structure, may be varied with respect to magnitude, bandwidth, and center frequency, but differential thresholds with noise are generally predictable from the pure tone data.

Signal detection theory. Recently, there has come into psychophysics, principally by way of psychoacoustics, a new way of thinking about detection data. This new approach makes use of signal detection theory and offers some novel ideas. First, it offers a way to measure sensory aspects of detection independently of decision processes. That is, under ordinary circumstances, the overt response to the signal depends not only upon the functioning of the receptor but on the utility of the response alternatives. If it is extremely important that a signal be detected, under ordinary circumstances a subject is more likely to give a positive response regardless of the activity of the receptor.

Second, it rejects the notion of a threshold; that is, a threshold in the sense that a mechanism exists which is triggered if some critical stimulus level is exceeded. One basis for such rejection is clear from the previous paragraph.

The theory of signal detectability substitutes for the concept of a threshold, the view that detection of a stimulus is a problem in the testing of statistical hypotheses. For example, the testing situation can be so structured that two stimulus conditions exist: the signal is present and the signal is absent. Clearly, there are four alternatives: (1) the listener can accept the hypothesis that the signal was present when, in fact, it was; (2) he can reject this hypothesis under the same conditions; (3) he can accept the hypothesis that the signal was absent when, in fact, it was; (4) he can reject this hypothesis. By making certain assumptions about the characteristics of the stimulus and proceeding under the ideal condition that the observer makes use of all information in the signal, the probabilities associated with these alternatives may be mapped out. It has been shown that an actual observer behaves as if he were performing in this fashion, and his performance may therefore be compared to that ideally obtainable.
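A standard way to separate sensitivity from decision bias in this framework is the index d′, the separation between the hypothesized "signal absent" and "signal present" distributions, estimated from the observed hit and false-alarm proportions. A minimal sketch (assuming SciPy is available; the proportions are invented for illustration):

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: z(hit rate) - z(false-alarm rate), assuming
    equal-variance Gaussian signal and noise distributions."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

def criterion(hit_rate, false_alarm_rate):
    """Response bias c: negative values indicate a 'yes'-prone observer,
    positive values a conservative one."""
    return -0.5 * (norm.ppf(hit_rate) + norm.ppf(false_alarm_rate))

# A hypothetical session: 85% hits, 20% false alarms.
print(round(d_prime(0.85, 0.20), 2))    # 1.88
print(round(criterion(0.85, 0.20), 2))  # -0.1
```

Two observers with the same d′ may differ widely in their hit rates simply because they adopt different criteria, which is the sense in which the sensory and decision aspects of detection can be measured independently.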

Suprathreshold phenomena. With signals that are easily audible, it is generally conceded that there are three primary perceptual dimensions to hearing: pitch, loudness, and quality. Considerable effort has been expended in searching for the stimulus correlates and physiological mechanisms associated with these dimensions.

Pitch and theories of hearing. With a simple pure tone, pitch is usually associated with the frequency of vibration of the stimulus. High frequencies tend to give rise to high pitches. Historically, there have been two general types of theories of hearing: “place” theories and “volley” theories (often called frequency theories).

The most commonly suggested mechanism of pitch is based on a place hypothesis, which holds that the pertinent cue for pitch is the particular locus of activity within the nervous system. It seems likely that stimulation of specific neurons or groups of neurons is related to the displacement patterns of the basilar membrane. The chief alternative to the place hypothesis is the volley or rate of neural discharge hypothesis, which holds that the rate or frequency with which neural discharge occurs within the auditory nerve is the determinant of pitch; the higher the frequency, the higher the pitch. The frequency of neural discharge, in turn, is synchronous with the stimulus frequency.

Any result in which pitch is influenced by a parameter other than frequency is not in accord with the neural discharge hypothesis. Such results include changes in pitch brought about by differences in the magnitude of the stimulus, by masking, fatigue, or auditory pathology. On the other hand, the place hypothesis cannot readily explain how a pitch corresponding to a particular frequency is perceived, when, in fact, little or no energy exists in the stimulus at that frequency. Such a situation exists for several pitch phenomena: the residue, periodicity pitch, time separation pitch, and Huggins’ effect.

Loudness. Loudness is related to the magnitude of the stimulus, but not exclusively so. Frequency and duration of the stimulus are secondary factors in determining loudness. The loudness of a stimulus depends upon prior acoustic stimulation in somewhat the same manner that absolute threshold does. Generally, loudness decreases following adaptation, and the pertinent parameters are the same as those that influence threshold shifts.

Quality. Quality, or timbre, is a complex perceptual quantity that appears to be associated with the harmonic structure of the sound wave. The greater the number of audible harmonics, the richer or fuller the sound will appear. The converse also appears to be true. Relatively little work has been done in this area.

Scaling and harmonics. Psychophysical scaling, or the assessment of the relation between the magnitude of the stimulus and the magnitude of the sensation, has been of interest for many years. New methods, whose chief virtues are simplicity and relative freedom from bias, have recently stimulated additional research. Auditory dimensions that have been studied include loudness, pitch, duration, volume, density, and noxiousness.

The principal finding is that in nearly all cases the relation between stimulus and sensation is a power function.
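For loudness this relation is usually written in the form associated with S. S. Stevens,

$$\psi = k\,\varphi^{\,n},$$

where ψ is sensation magnitude, φ is stimulus magnitude, and k is a scaling constant. With φ taken as sound intensity, the loudness exponent n is commonly reported to be about 0.3, implying that loudness roughly doubles for each 10 dB increase in level; the exponents for other auditory dimensions differ.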

Nonlinear effects. In simple systems in which response magnitude is a nonlinear function of stimulus input, harmonics are generated when the system input is in the form of a simple sinusoid. When the input is two sinusoids, or pure tones, then in addition to harmonics, components exist whose frequency is equal to the sum and the difference of the input frequencies. Such effects are seen when the auditory system is driven at moderate and high intensities. That is, additional tones called aural harmonics and sum and difference tones are perceived corresponding to the predicted frequencies. This seems to indicate that the auditory system behaves in a nonlinear fashion over the upper part of its dynamic range.
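This behavior is easy to demonstrate numerically. The sketch below (assuming NumPy; the quadratic nonlinearity and the input frequencies are arbitrary choices) passes two pure tones through a slightly nonlinear system and lists the frequencies that appear in the output:

```python
import numpy as np

fs = 8000                        # sample rate (Hz)
t = np.arange(fs) / fs           # one second of samples
f1, f2 = 700, 1000               # two input pure tones (Hz)
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

y = x + 0.2 * x ** 2             # a mildly nonlinear "system"

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1 / fs)
print(sorted(freqs[(spectrum > 0.01) & (freqs > 0)]))
# [300.0, 700.0, 1000.0, 1400.0, 1700.0, 2000.0]: the two inputs plus the
# difference tone (300), harmonics (1400, 2000), and the sum tone (1700).
```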

Binaural hearing. Under most conditions, stimuli arising from a single sound source are represented somewhat differently at each of the two ears. The auditory system makes use of these subtle differences in such a fashion that we are able to localize the sound in space. The binaural system is especially sensitive to small temporal disparities in the two inputs, being capable of discriminating differences as small as 0.000008 second. Intensity differences at the two ears also play a role in localization.
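The size of these temporal disparities can be estimated with a simple path-difference model that ignores diffraction around the head (a deliberate simplification). For a source directly to one side, the extra path to the far ear is roughly the interaural distance d, so with d ≈ 0.2 m and a speed of sound c ≈ 343 m/s,

$$\Delta t \approx \frac{d}{c} = \frac{0.2\ \text{m}}{343\ \text{m/s}} \approx 0.0006\ \text{second}.$$

Against this maximum of roughly 600 microseconds, the discriminable disparity of 0.000008 second quoted above corresponds to a source displaced only about one degree from straight ahead.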

Although localization effects are the most dramatic event in binaural hearing, other interesting binaural phenomena occur. For example, less energy is required for threshold if both ears, rather than just one ear, are stimulated. Similarly, the same loudness may be achieved binaurally with less energy than it could be monaurally.

Our sense of hearing provides us with information relative to vibratory or acoustic events. This information relates to the magnitude, frequency, duration, complexity, and spatial locus of the event. The peripheral auditory system is an elegantly designed hydromechanical structure. The sensory cells themselves and the complexities of their innervation are of considerable importance, but are less well understood. In total, hearing is an extremely versatile sensory process with exquisite sensitivity.

Arnold M. Small, Jr.

[Other relevant material may be found in Attention; Nervous system; Psychophysics; Senses; and in the biography of Helmholtz.]

BIBLIOGRAPHY

Conference on the Neural Mechanisms of the Auditory and Vestibular Systems, Bethesda, Md., 1959. 1960 Neural Mechanisms of the Auditory and Vestibular Systems. Springfield, Ill.: Thomas. → The first 16 chapters deal with the auditory system.

Geldard, Frank A. 1953 The Human Senses. New York: Wiley.

Harvard University, Psycho-acoustic Laboratory 1955 Bibliography on Hearing. Cambridge, Mass.: Harvard Univ. Press. → Contains more than ten thousand titles.

Helmholtz, Hermann L. F. von (1862) 1954 On the Sensations of Tone as a Physiological Basis for the Theory of Music. New York: Dover. → First published as Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik. A classic which in many ways is as important today as when it was written.

Jerger, James (editor) 1963 Modern Developments in Audiology. New York: Academic Press. → Especially valuable for its readable review of signal detection theory and its coverage of the effect of acoustic stimulation on subsequent auditory perception.

Von Bekesy, Georg (1928–1958) 1960 Experiments in Hearing. New York: McGraw-Hill. → A compilation of Georg Von Bekesy’s writings on cochlear mechanics, psychoacoustics, and the ear’s conductive processes.

Wever, Ernest G. 1949 Theory of Hearing. New York: Wiley. → A review of theories of hearing with a special attempt to show the cogency of a volley theory.

Wever, Ernest G.; and Lawrence, Merle 1954 Physiological Acoustics. Princeton Univ. Press. → Emphasis on the mechanics of the middle ear.

Hearing

Sounds are rapid variations in pressure, which are propagated through the air away from a vibrating object, such as a loudspeaker cone or the human vocal cords. Our sense of hearing allows us to detect and identify the myriad sounds present in our environment, and to determine their whereabouts. In humans and other animals with a poorly-developed sense of smell, hearing plays a particularly important role in alerting the listener to novel events in the environment. Through speech and music, human hearing also makes an extremely important contribution to social communication.

When the prongs of a tuning fork vibrate back and forth in a regular manner, a periodic sound is produced. For such a pure tone, the simplest type of sound, the pressure increases and then decreases following a smooth wave pattern (a sinusoidal function). The number of complete cycles per second is known as the frequency of the tone and is measured in Hertz (Hz). More commonly, natural sounds contain a number of different frequency components, the variation in intensity across the frequency range being referred to as the spectrum of the sound. The fundamental frequency of a complex tone corresponds to its perceived pitch, whereas the full spectrum determines the timbre, or sound quality. Thus, the same note played on two different musical instruments may sound different, as a result of differences in the additional frequencies in their spectra.

Young, healthy humans can hear sound frequencies from about 20 Hz to 20 kHz, although the upper frequency limit declines with age. Other mammals can hear frequencies that are inaudible to humans, both lower and higher. Some bats, for example, which navigate by echolocation, both emit and hear sounds with frequencies of more than 100 kHz. In general, there is a good match between the sound frequencies to which an animal is most sensitive and those frequencies it uses for communication. This is true in humans, who are most sensitive over a broad range of tones that cover the spectrum of human speech.

Compared with total atmospheric pressure, airborne sound waves represent extremely small pressure changes. The amplitude of the pressure variation in a sound directly determines its perceived loudness. Because the range of sound pressures that can be heard is so large, a logarithmic scale of decibels (dB) is used to measure sound intensity. On this scale, 0 dB is around the lowest sound level that can be heard by a human listener, whereas sound levels of 100 dB or more are uncomfortably loud and may damage the ears. At pop concerts and in discos the sound level can be much higher than this!
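The entry does not state the underlying formula, but the standard definition of sound pressure level on this scale is

\[ L_p = 20 \log_{10}\!\left(\frac{p}{p_0}\right) \text{dB}, \qquad p_0 = 20\ \mu\text{Pa}, \]

where \(p\) is the root-mean-square sound pressure and \(p_0\) is a reference pressure chosen near the threshold of human hearing. Each 20 dB step therefore corresponds to a tenfold increase in sound pressure, which is why 100 dB represents a pressure 100,000 times that of the faintest audible sound.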

The design of the ear changed substantially between aquatic and terrestrial vertebrates, but has remained very similar among mammals (except for specializations for different parts of the frequency spectrum). The human ear is subdivided into the external, middle, and inner ear. The visible part of the ear comprises the skin-covered cartilaginous external ear. This includes the pinna on the side of the head and the external auditory meatus, or ear canal, which terminates at the eardrum. As they travel into the ear canal, sounds are filtered so that the amplitude of different frequency components is altered in different ways depending on the location of the sound source. These spectral modifications, which are not perceived as a change in sound quality, help us to localize the source of the sound. They are particularly important for distinguishing between sounds located in front of and behind, or above and below, the listener, and for localizing sounds heard with only one ear, whether because of one-sided deafness or because a very quiet sound is inaudible to the other ear. Because of its resonance characteristics, the external ear also amplifies the sound pressure at the eardrum by up to 20 dB in humans over a frequency range of 2–7 kHz.

Lying behind the eardrum is an air-filled cavity known as the middle ear, which is connected to the back of the throat via the eustachian tube. Opening of this tube during swallowing and yawning serves to maintain the middle ear cavity at atmospheric pressure. Airborne sounds pass through the middle ear to reach the fluid-filled cochlea of the inner ear, where the process of transduction — the conversion of sound into the electrical signals of nerve cells — takes place. Because of its greater density, the fluid in the cochlea has a much higher resistance to sound vibration than the air in the middle ear cavity. To prevent most of the incoming sound energy from being reflected back, vibrations of the eardrum are mechanically coupled to a flexible membrane (the oval window) in the wall of the cochlea by the three smallest bones in the body (the malleus, incus, and stapes — together known as the ossicles). These delicately suspended bones improve the efficiency with which sound energy is transferred from the air to the fluid in the cochlea and therefore prevent the loss in sound pressure that would otherwise occur due to the higher impedance of the cochlear fluids. This is achieved primarily because the vibrations of the eardrum are concentrated on the much smaller footplate of the stapes, which fits into the oval window of the cochlea. The smallest skeletal muscles in the body are attached to the ossicles, and contract reflexly in response to loud sounds or when their owner speaks. These contractions dampen the vibrations of the ossicles, thereby reducing the transmission of sound through the middle ear. As with the external ear, the efficiency of middle ear transmission varies with sound frequency. Together, these structures determine the frequencies to which we are most sensitive.

The inner ear includes the cochlea, the hearing organ, and the semicircular canals and otolith organs, the sense organs of balance. Both systems employ specialized receptor cells, known as hair cells, for detecting mechanical changes within the fluid-filled inner ear. Projecting from the apical surface of each hair cell is a bundle of around 100 hairs called stereocilia. Deflection of the bundle of hairs by sound (in the cochlea) or head motion or gravity (in the balance organs) leads to the opening of pores in the membrane of the hairs that allow small, positively-charged ions to rush into the hair cell and change its internal voltage. This causes a neurotransmitter to be released from the base of the hair cell, which, in turn, activates the ends of nerve fibres that convey information from the ear towards the brain. Although there are some differences between the hair cells of the hearing and balance organs, they work in essentially the same way.

The mammalian cochlea is a tube which is coiled so that it fits compactly within the temporal bone. The length of the cochlea — just over 3 cm in humans — is related to the range of audible frequencies rather than the size of the animal. Consequently, this structure does not vary much in size between mice and elephants. It is subdivided lengthwise into two principal regions by a collagen-fibre meshwork known as the basilar membrane. Around 15 000 hair cells, together with the nerves that supply them and supporting cells, are distributed in rows along its length. Vibrations transmitted by the middle ear ossicles to the oval window produce pressure gradients between the cochlear fluids on either side of the basilar membrane, setting the membrane into motion. The hair cells are ideally positioned to detect very small movements of the basilar membrane. There are two types of hair cells in the cochlea. The inner hair cells form a single row, whereas the more numerous outer hair cells are typically arranged into three rows.

In the nineteenth century, the great German physiologist and physicist Hermann von Helmholtz proposed that our perception of pitch arises because each region of the cochlea resonates at a different frequency (rather like the different strings of a piano). The first direct measurements of the response of the cochlea to sound were made by Georg von Békésy a century later, on the ears of human cadavers. He showed that very loud sounds induced a travelling wave of displacement along the basilar membrane, which resembles the motion produced when a rope is whipped. Von Békésy observed that the wave built up in amplitude as it travelled along the membrane and then decreased abruptly. For high-frequency sounds, the peak amplitude of the wave occurs near the base of the cochlea (adjacent to the middle ear), whereas the position of the peak shifts towards the other end of the tube (the apex) for progressively lower frequencies. This indeed occurs because the basilar membrane increases in width and decreases in stiffness from base to apex. These observations, which led to von Békésy winning the Nobel Prize, established that the cochlea performs a crude form of Fourier analysis, splitting complex sounds into their different frequency components along the length of the basilar membrane.

More recently, much more sensitive techniques, which can measure vibrations of less than a billionth of a metre, have revealed that motion of the basilar membrane is dramatically different in living and dead preparations. In animals in which the cochlea is physiologically intact, the movements of the basilar membrane are amplified, giving rise to much greater sensitivity and sharper frequency ‘tuning’ than can be explained by the variation in width and stiffness along its length. This amplifying step most likely involves the living outer hair cells, which, when stimulated by sound, actively change their length, shortening and lengthening up to thousands of times per second. These tiny movements appear to feed energy back into the cochlea to alter the mechanical response of the basilar membrane. Damage to the outer hair cells, following exposure to loud sounds or ‘ototoxic’ drugs, leads to poorer frequency selectivity and raised thresholds of hearing. The active responses of the outer hair cells are probably responsible for the extraordinary fact that the ear itself produces sound, which can be recorded with a microphone placed close to the ear and used to provide an objective measure of the performance of the ear.

Vibrations of the basilar membrane, detected by the inner hair cells, are transmitted to the brain in the form of trains of nerve impulses passing along the 30 000 axons of the auditory nerve (which mostly make contact with the inner hair cells). Each nerve fibre responds to motion of a limited portion of the basilar membrane and is therefore tuned to a particular sound frequency. Consequently, the frequency content of a sound is represented within the nerve and the auditory regions of the brain by which fibres are active. For frequencies below about 5 kHz, the auditory nerve fibres act like microphones, in that the impulses tend to be synchronized to a particular phase of the cycle of the stimulus. This property, known as phase-locking, allows changes in sound frequency to be represented to the brain by differences in the timing of action potentials and is thought to be particularly important for pitch perception at low frequencies and for speech perception. The intensity of sound is represented within the auditory system by the rate of firing of individual neurons — the number of nerve impulses generated per second — and by the number of neurons that are active.

Auditory signals are relayed through various nuclei (collections of nerve cell bodies) in the brain stem and thalamus, up to the temporal lobe of the cerebral cortex. At each nucleus, the incoming fibres that relay information to the next group of nerve cells are distributed in a topographic order, preserving the spatial relationships of the regions of basilar membrane from which they receive information. This spatial ordering of nerve fibres establishes a neural ‘map’ of sound frequency in each nucleus. The extraction of biologically important information — ‘What is the sound? Where did it come from?’ — takes place in the brain. As a result of the complex pattern of connections that exist within the auditory pathways, many neurons, particularly in the cortex, respond better to complex sounds than to pure tones. Indeed, in certain animals, including songbirds and echolocating bats, physiologists have discovered neurons that are tuned to behaviourally important acoustical features (components of bird song or bat calls). But auditory processing reaches its zenith in humans, where different regions of the cerebral cortex appear (according to studies involving imaging techniques) to have specialized roles in the perception of language and music.

The ability to localize sounds in space assumes great importance for animals seeking prey or trying to avoid potential predators, and also when directing attention towards interesting events. Although sounds can be localized using one ear alone, an improvement in performance is usually seen if both ears hear the sound. Such binaural localization depends on the detection of tiny differences in the intensity or timing of sounds reaching the two ears. At the beginning of the twentieth century, Lord Rayleigh demonstrated that human listeners can localize sounds below about 1500 Hz using the minute differences between the time of arrival (or phase) of the sound at the two ears, which arise because the sound arrives slightly later at the ear further from the sound source. He also showed that interaural intensity differences, which result from the acoustical ‘shadow’ cast by the head, are effective cues at higher frequencies. Using these cues, listeners can distinguish two sources separated by as little as 1° in angle in the horizontal plane.
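A rough calculation (an illustrative addition, assuming a head width of about 20 cm and a speed of sound of 343 m/s) shows how small these timing differences are. For a sound arriving directly from one side, the interaural time difference is at most approximately

\[ \Delta t \approx \frac{d}{c} = \frac{0.2\ \text{m}}{343\ \text{m/s}} \approx 0.6\ \text{ms}, \]

and it shrinks toward zero as the source moves toward the midline, so achieving the 1° acuity mentioned above requires the brain to resolve differences of only about ten microseconds.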

Studies in animals have shown that neurons in auditory nuclei of the brain stem receive converging signals from the two ears. By comparing the timing of the phase-locked nerve impulses coming from each side, some of these neurons show sensitivity to differences in the sound arrival time at the two ears of the order of tens of microseconds, whereas other neurons are exquisitely sensitive to interaural differences in sound level. As well as facilitating the localization of sound sources, binaural hearing improves our ability to pick out particular sound sources, which helps us to detect and analyze them, particularly against a noisy background (aptly termed the ‘cocktail party effect’).

Andrew J. King

Bibliography

Moore, B. C. J. (1997). An introduction to the psychology of hearing, (4th edn). Academic Press, London.
Pickles, J. O. (1988). An introduction to the physiology of hearing, (2nd edn). Academic Press, London.


See also deafness; ear, external; eustachian tube; hearing aid; sense organs; sensory integration; tinnitus.

Hearing

views updated Jun 27 2018

Hearing

Definition

Hearing is the ability of the human ear to collect, process, and interpret sound.

Description

Sound vibrations travel through air, water, or solids in the form of pressure waves. When a sound wave hits a flexible object, such as the eardrum, it causes it to vibrate, which begins the process of hearing. The process of hearing involves the conversion of acoustical energy (sound waves) to mechanical, hydraulic, chemical, and finally, electrical energy where the signal reaches the brain and is interpreted.

The basis of sound is simple: there is a vibrating source, a medium in which sound travels, and a receiver. For humans the most important sounds are those which carry meaning, for example, speech and environmental sounds. Sounds can be described in two ways, by their frequency (pitch) or by their intensity (loudness).

Frequency (the number of vibrations or sound waves per second) is measured in Hertz (Hz). A sound that is 4,000 Hz (like the sound the letter 'F' makes) has 4,000 waves per second. Healthy young adults can hear frequencies between 20 and 20,000 Hz. However, the frequencies most important for understanding speech are between 200 and 8,000 Hz. As adults age, the ability to hear high-frequency sounds decreases. An example of a high frequency sound is a bird chirping, while a drum beating is a low frequency sound.
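To connect frequency with the physical size of a sound wave (an added note, not part of the original entry), the wavelength \(\lambda\) follows from the speed of sound \(c\), about 343 m/s in air:

\[ \lambda = \frac{c}{f}, \qquad \lambda_{4000\ \text{Hz}} = \frac{343\ \text{m/s}}{4000\ \text{Hz}} \approx 8.6\ \text{cm}. \]

So the 4,000 Hz sound mentioned above has waves roughly 9 cm long, while a 100 Hz drum beat produces waves more than 3 m long.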

Intensity (loudness) is the amount of energy of a vibration, and is measured in decibels (dB). A zero decibel sound (like leaves rustling in the wind) can barely be heard by young healthy adults. In contrast, a 120 dB sound (like a jet engine at 20 ft [6 m]) is perceived as very loud and/or painful. Extremes of either loudness or pitch may seriously damage the human ear and should be avoided.

The difference between frequency or pitch and intensity or loudness can be illustrated using the piano as an analogy. The piano keyboard contains 88 keys that represent different frequencies or notes. The low frequencies or bass notes are on the left, the higher frequencies or treble notes are on the right. Middle C on the keyboard represents approximately 256 Hz. The intensity or loudness of a note depends on how hard you hit the key. A light touch on middle C may produce a 30 dB, 256 Hz note, while a hard strike on middle C may produce a 55 dB, 256 Hz note. The frequency, or note, stays the same, but the intensity, or loudness, varies as the pressure on the key varies.

Function

Human hearing involves a complicated process of energy conversion. This process begins with two ears located at opposite sides of the human head. The ability to use two ears for hearing is called binaural hearing. The primary advantages of binaural hearing are the increased ability to localize sounds and the increased ease of listening to a particular sound while having other noises in the background. Sound waves from the world around us enter the ear and are processed and relayed to the brain.

The actual process of sound transmission differs in each of the three parts of the human ear. The three parts of the human ear are the outer ear, middle ear, and inner ear.

Role in human health

The outer ear plays an important role in hearing. The pinna of the outer ear gathers sound waves from the environment and transmits them through the external auditory canal and eardrum to the middle ear. In the process of collecting sounds, the outer ear also modifies the sound. The external ear, or pinna, in combination with the head, can slightly amplify (increase) or attenuate (decrease) certain frequencies. This amplification or attenuation is due to individual differences in the dimensions and contours of the head and pinna.

The external auditory canal can also modify sound. This tube-like canal is able to amplify specific frequencies in the 3,000 Hz region. An analogy would be an opened, half-filled soda bottle: blowing across the top produces a tone whose frequency depends on the size of the air space inside. If you empty some of the fluid and blow into the bottle again, the frequency of the sound will change. Since the size of the human ear canal is consistent, the specific frequency it amplifies is also constant. Sound waves travel through the ear canal until they reach the tympanic membrane or eardrum. Together, the head, pinna, and external auditory canal amplify sounds in the 2,000-4,000 Hz range by 10-15 dB. This boost is needed since the process of transmitting sound from the outer to the middle ear requires added energy.
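The 3,000 Hz figure can be checked with a simple model (an illustrative calculation, assuming the canal behaves as a tube about 2.5 cm long, open at the pinna and closed at the eardrum). Such a quarter-wavelength resonator peaks near

\[ f = \frac{c}{4L} = \frac{343\ \text{m/s}}{4 \times 0.025\ \text{m}} \approx 3400\ \text{Hz}, \]

which is consistent with the amplification region described above.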

The middle ear is separated from the outer ear by the tympanic membrane or eardrum. The membrane vibrates in response to pressure from sound waves traveling through the external auditory canal. The initial vibration causes the membrane to be displaced (pushed) inward by an amount proportional to the intensity of the sound, so that loud sounds push the eardrum more than soft sounds. Once the eardrum is pushed inwards, the pressure within the middle ear causes the eardrum to be pulled outward, setting up a back-and-forth motion that begins the conversion and transmission of acoustical energy (sound waves) to mechanical energy (bone movement).

The three small, connected bones of the middle ear, together called the ossicles, are: the hammer or malleus, the anvil or incus, and the stapes or stirrup. The tiny, interconnected bones move as a unit in a type of lever-like action. The first bone, the malleus, is attached to the tympanic membrane, and the back-and-forth motion of the tympanic membrane sets all three bones in motion. The final result of this bone movement is pressure of the foot plate of the last and smallest bone, the stapes, on the oval window. The oval window is one of two small membranes that allow communication between the middle ear and the inner ear. The lever-like action of the bones amplifies the mechanical energy from the eardrum to the oval window. The energy in the middle ear is also amplified due to the difference in surface size between the tympanic membrane and the oval window, which has been calculated to be a difference of about 14 to one.

The relationship of the eardrum or tympanic membrane to the oval window can be compared to that of a thumbtack. The eardrum would be the head of the thumbtack and the oval window would be the pin point of the thumbtack. The eardrum or the head of the tack would collect and apply pressure and then focus it on the oval window or the pin point, driving it into the surface. The overall amplification in the middle ear is approximately 25 dB. The conversion from mechanical energy or bone movement to hydraulic energy or fluid movement requires added energy since sound does not travel easily through fluids.
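The approximately 25 dB figure follows from the numbers in the passage (a sketch of the arithmetic; the ossicular lever ratio of roughly 1.3 is a commonly cited value and is not stated in the entry). The pressure gain is about the area ratio multiplied by the lever ratio, expressed in decibels:

\[ 20 \log_{10}(14 \times 1.3) \approx 20 \log_{10}(18.2) \approx 25\ \text{dB}. \]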

The inner ear is the site where hydraulic energy or fluid movement is converted first to chemical energy or hair cell activity and finally to electrical energy or nerve transmission. Once the signal is transmitted to the nerve, it will travel up to the brain to be interpreted.

The bone movements in the middle ear cause movement of the stapes foot plate in the membrane of the oval window. This pressure causes fluid waves or hydraulic energy throughout the entire two-and-a-half turns of the cochlea. The design of the cochlea allows for very little fluid movement; therefore, the pressure at the oval window is released by the interaction between the oval and round windows. When the oval window is pushed forward by the stapes foot plate, the round window bulges outward and vice versa. This action permits the fluid wave motion in the cochlea. The cochlea is the fluid-filled, snail-shell-shaped, coiled organ in the inner ear that contains the actual sense receptors for hearing. The fluid motion causes a corresponding, but not equal, wave-like motion of the basilar membrane. Internally, the cochlea consists of three fluid-filled chambers: the scala vestibuli, the scala tympani, and the scala media. The basilar membrane is located in the scala media portion of the cochlea, and separates the scala media from the scala tympani. The basilar membrane holds the key structure for hearing, the organ of Corti.

The physical characteristics of the basilar membrane are important, as is its wave-like movement, from its base or originating point to its apex or tip. The basilar wave motion slowly builds to a peak and then quickly dies out. The distance the wave takes to reach the peak depends on the speed at which the oval window is moved. For example, high frequency sounds have short wavelengths, causing rapid movements of the oval window, and peak movements on the basilar membrane near the base of the cochlea. In contrast, low frequency sounds have long wavelengths and cause slower movements of the oval window, and peak movements of the basilar membrane near the apex. The place of the peak membrane movements corresponds to the frequency of the sound. Sounds can be located, or "mapped," on the basilar membrane. High frequency sounds are near the base, middle frequency sounds are in the middle, and low frequency sounds are near the apex. In addition to the location on the basilar membrane, the frequency of sounds can be identified based on the number of nerve impulses sent to the brain.
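A widely used quantitative version of this frequency-to-place map in the human cochlea is Greenwood's function (an added reference formula, not part of the original entry):

\[ F = 165.4\,\left(10^{2.1x} - 0.88\right)\ \text{Hz}, \]

where \(x\) is the fractional distance along the basilar membrane from the apex (\(x = 0\)) to the base (\(x = 1\)). It gives about 20 Hz at the apex and roughly 20,000 Hz at the base, matching the mapping described above.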

The organ of Corti lies upon the basilar membrane and contains three to five outer rows (12,000 to 15,000) of hair cells and one inner row (3,000) of hair cells. The influence of the inner and outer hair cells has been widely researched. The common view is that the numerous outer hair cells respond to low intensity sounds below 60 dB. The inner hair cells act as a booster, by responding to high intensity, louder sounds. When the basilar membrane moves, it causes the small hairs on the top of the hair cells, or stereocilia, to bend against the overhanging tectorial membrane. The bending of the hair cells causes chemical actions within the cell itself, creating electrical impulses in the nerve fibers attached to the bottom of the hair cells. The nerve impulses travel up the nerve to the temporal lobe of the brain. The intensity of a sound can be identified based on the number of hair cells affected and the number of impulses sent to the brain. Loud sounds cause a large number of hair cells to be moved, and many nerve impulses to be transmitted to the brain.

The separate nerve fibers then join and travel to the lowest portion of the brain, the brain stem. Nerves from the vestibular, or balance, part of the inner ear combine with the cochlear nerves to form the VIII cranial nerve (auditory or vestibulocochlear nerve). Once the nerve impulses enter the brain stem, they follow an established pathway, known as the auditory pathway. The organization within the auditory pathway allows for a large amount of crossover. "Crossover" means that the sound information, or nerve impulses, from one ear does not travel exclusively to one side of the brain. Some of the nerve impulses cross over to the opposite side of the brain. The impulses travel up the auditory pathway on both sides of the brain until they reach a specific point in the temporal lobe called Heschl's gyrus. Crossovers act like a safety net. If one side of the auditory pathway is blocked or damaged, the impulses can still reach Heschl's gyrus to be interpreted as sound.

Common diseases and disorders

There are several common diseases, disorders, and conditions that occur in the external ear, middle ear, eardrum, and inner ear that can affect the sense of hearing in humans.

  • External otitis (swimmer's ear), an inflammation or infection of the external ear.
  • Furunculosis or recurring boils in the ear canal.
  • Exostoses or benign tumors of the ear canal.
  • Foreign bodies or anything that gets stuck in the ear. These can range from insects or seeds to earplugs that cannot be removed.
  • Trauma to the eardrum.
  • Bullous myringitis, an inflammation of the eardrum.
  • Retracted eardrum or blocked eustachian tube.
  • Barotitis media or eardrum retracted by change of atmospheric pressure while the eustachian tube is blocked.
  • Otitis media, a middle ear infection.
  • Acute mastoiditis, a severely infected mastoid process.
  • Otosclerosis or ear bone degeneration.
  • Cholesteatoma or epithelial inclusion cyst.
  • Meniere's disease or vertigo.
  • Acoustic neurinoma, a tumor on the vestibular nerve.
  • Presbycusis or sensorineural hearing loss due to aging.
  • Labyrinthitis, an infection of the inner ear.
  • Vestibular neuronitis, a sudden loss of the balance mechanism in one ear.
  • Tinnitus or the sensation of sound in the ear when there is no sound.

KEY TERMS

Amplify— To increase.

Attenuate— To decrease.

Bilateral— Both sides of an object.

Binaural— Both ears.

Decibel— Unit of measurement of the intensity of sound, abbreviated dB.

Equilibrium— Balance, the ability to maintain body position.

Frequency— Pitch, the number of vibrations or sound waves per second.

Hertz— Unit of measurement for frequency, abbreviated Hz.

Hydraulic— Fluid in motion.

Intensity— Loudness, the amount of energy of a vibration or sound wave.

Localization— Ability to identify where a sound is coming from.

Resources

BOOKS

Burton, Martin, ed. Hall and Colman's Diseases of the Ear, Nose, and Throat. 15th ed. St. Louis, MO: Churchill Livingstone, 2000.

Clark, John Greer, and Frederick Martin. Introduction to Audiology. 7th ed. Boston: Allyn and Bacon, 1999.

Shin, Linda M., Linda M. Ross, and Karen Bellenir. Ear, Nose and Throat Disorders Sourcebook. Detroit: Omnigraphics, 1998.

Turkington, Carol, and Allan E. Sussman. Living With Hearing Loss: The Sourcebook for Deafness and Hearing Disorders. New York: Checkmark Books, 2000.

PERIODICALS

Mestel, Rosie. "Pinna to the Fore." Discover 14 (June 1993): 45-54.

ORGANIZATIONS

American Speech-Language-Hearing Association. 10801 Rockville Pike, Rockville, MD 20852. (800) 638-8255. 〈http://www.asha.org〉.

National Institute on Deafness and Other Communication Disorders. National Institutes of Health. 31 Center Dr., MSC 2320, Bethesda, MD 20892-2320. 〈http://www.nidcd.nih.gov〉.

OTHER

Diseases of the Ear, Nose and Throat. 〈http://cpmcnet.columbia.edu/texts〉.

Hearing

views updated Jun 11 2018

Hearing

Hearing is the ability to collect, process, and interpret sound. Sound vibrations travel through air, water, or solids in the form of pressure waves. When a sound wave hits a flexible object, such as the eardrum, it causes it to vibrate, which begins the process of hearing. The process of hearing involves the conversion of acoustical energy (sound waves) to mechanical, hydraulic, chemical, and finally electrical energy, where the signal reaches the brain and is interpreted.


Sound

The basis of sound is simple: there is a vibrating source, a medium in which sound travels, and a receiver. For humans, the most important sounds are those that carry meaning, for example, speech and environmental sounds. Sounds can be described in two ways: by their frequency (or pitch) and by their intensity (or loudness).

Frequency (the number of vibrations or sound waves per second) is measured in Hertz (Hz). A sound that is 4,000 Hz (like the sound the letter "F" makes) has 4,000 waves per second. Healthy young adults can hear frequencies between 20 and 20,000 Hz. However, the frequencies most important for understanding speech are between 200 and 8,000 Hz. As adults age, the ability to hear high frequency sounds decreases. An example of a high frequency sound is a bird chirping, while a drum beating is a low frequency sound.

Intensity (loudness) is the amount of energy of a vibration, and is measured in decibels (dB). A zero decibel sound (like leaves rustling in the wind) can barely be heard by young healthy adults. In contrast, a 120 dB sound (like a jet engine at 20 ft [6 m]) is perceived as very loud and/or painful. Extremes of either loudness or pitch may seriously damage the human ear and should be avoided.

The difference between frequency (pitch) and intensity (loudness) can be illustrated using the piano as an analogy. The piano keyboard contains 88 keys which represent different frequencies (or notes). The low frequencies (bass notes) are on the left, the higher frequencies (treble notes) are on the right. Middle C on the keyboard represents approximately 256 Hz. The intensity or loudness of a note depends on how hard you hit the key. A light touch on middle C may produce a 30 dB, 256 Hz note, while a hard strike on middle C may produce a 55 dB, 256 Hz note. The frequency (or note) stays the same, but the intensity or loudness varies as the pressure on the key varies.


Animal hearing

The difference between hearing in humans and animals is often visible externally. For example, some animals (e.g., birds) lack external ears/pinnas, but maintain similar internal structures to the human ear. Although birds have no pinnas, they have middle ears and inner ears similar to humans, and like humans, hear best at frequencies around 2,000 to 4,000 Hz. All mammals (the animals most closely related to the human) have outer ears/pinnas. Many mammals have the ability to move the pinna to help with localization of sounds. Foxes, for example, have large, bowl-shaped pinnas that can be moved to help locate distant or faint sounds. In addition to sound localization, some animals are able to manipulate their pinnas to regulate body temperature. Elephants do this by using their huge pinnas as fans and for heat exchange.

Human hearing

Human hearing involves a complicated process of energy conversion. This process begins with two ears located at opposite sides of the human head. The ability to use two ears for hearing is called binaural hearing. The primary advantages of binaural hearing are the increased ability to localize sounds and the increased ease of listening in background noise. Sound waves from the world around us enter the ear and are processed and relayed to the brain. The actual process of sound transmission differs in each of the three parts of the human ear (the outer, middle, and inner ears).


Outer ear and hearing

The pinna of the outer ear gathers sound waves from the environment and transmits them through the external auditory canal and eardrum to the middle ear. In the process of collecting sounds, the outer ear also modifies the sound. The external ear, or pinna, in combination with the head, can slightly amplify (increase) or attenuate (decrease) certain frequencies. This amplification or attenuation is due to individual differences in the dimensions and contours of the head and pinna.

A second source of sound modification is the external auditory canal. The tube-like canal is able to amplify specific frequencies in the 3,000 Hz region. An analogy would be an opened, half-filled soda bottle. When you blow into the bottle there is a sound, the frequency of which depends on the size of the bottle and the amount of space in the bottle. If you empty some of the fluid and blow into the bottle again, the frequency of the sound will change. Since the size of the human ear canal is consistent, the specific frequency it amplifies is also constant. Sound waves travel through the ear canal until they strike the tympanic membrane (the eardrum). Together, the head, pinna, and external auditory canal amplify sounds in the 2,000 to 4,000 Hz range by 10-15 dB. This boost is needed since the process of transmitting sound from the outer ear to the middle ear requires added energy.


Middle ear and hearing

The tympanic membrane or eardrum separates the outer ear from the middle ear. It vibrates in response to pressure from sound waves traveling through the external auditory canal. The initial vibration causes the membrane to be displaced (pushed) inward by an amount proportional to the intensity of the sound, so that loud sounds push the eardrum more than soft sounds. Once the eardrum is pushed inwards, the pressure within the middle ear causes the eardrum to be pulled outward, setting up a back-and-forth motion which begins the conversion and transmission of acoustical energy (sound waves) to mechanical energy (bone movement).

The small connected bones of the middle ear (the ossicles—malleus, incus, and stapes) move as a unit, in a type of lever-like action. The first bone, the malleus, is attached to the tympanic membrane, and the back-and-forth motion of the tympanic membrane sets all three bones in motion. The final result of this bone movement is pressure of the footplate of the last (smallest) bone (the stapes), on the oval window. The oval window is one of two small membranes which allow communication between the middle ear and the inner ear. The lever-like action of the bones amplifies the mechanical energy from the eardrum to the oval window. The energy in the middle ear is also amplified due to the difference in surface size between the tympanic membrane and the oval window, which has been calculated at 14 to 1. The large head of a thumbtack collects and applies pressure and focuses it on the pin point, driving it into the surface. The eardrum is like the head of the thumb tack and the oval window is the pin point. The overall amplification in the middle ear is approximately 25 dB. The conversion from mechanical energy (bone movement) to hydraulic energy (fluid movement) requires added energy since sound does not travel easily through fluids. We know this from trying to hear under water.


Inner ear and hearing

The inner ear is the site where hydraulic energy (fluid movement) is converted to chemical energy (hair cell activity) and finally to electrical energy (nerve transmission). Once the signal is transmitted to the nerve, it will travel up to the brain to be interpreted.

The bone movements in the middle ear cause movement of the stapes footplate in the membrane of the oval window. This pressure causes fluid waves (hydraulic energy) throughout the entire two and a half turns of the cochlea. The design of the cochlea allows for very little fluid movement; therefore, the pressure at the oval window is released by the interaction between the oval and round windows. When the oval window is pushed forward by the stapes footplate, the round window bulges outward and vice versa. This action permits the fluid wave motion in the cochlea. The cochlea is the fluid-filled, snail-shell-shaped, coiled organ in the inner ear which contains the actual sense receptors for hearing. The fluid motion causes a corresponding, but not equal, wave-like motion of the basilar membrane. Internally, the cochlea consists of three fluid-filled chambers: the scala vestibuli, the scala tympani, and the scala media. The basilar membrane is located in the scala media portion of the cochlea, and separates the scala media from the scala tympani. The basilar membrane holds the key structure for hearing, the organ of Corti.

The physical characteristics of the basilar membrane are important, as is its wave-like movement, from base (originating point) to apex (tip). The basilar wave motion slowly builds to a peak and then quickly dies out. The distance the wave takes to reach the peak depends on the speed at which the oval window is moved. For example, high frequency sounds have short wavelengths, causing rapid movements of the oval window, and peak movements on the basilar membrane near the base of the cochlea. In contrast, low frequency sounds have long wavelengths, cause slower movements of the oval window, and peak movements of the basilar membrane near the apex. The place of the peak membrane movements corresponds to the frequency of the sound. Sounds can be "mapped" (or located) on the basilar membrane; high frequency sounds are near the base, middle frequency sounds are in the middle, and low frequency sounds are near the apex. In addition to the location on the basilar membrane, the frequency of sounds can be identified based on the number of nerve impulses sent to the brain.

The organ of Corti lies upon the basilar membrane and contains three to five outer rows (12,000 to 15,000 hair cells) and one inner row (3,000) of hair cells. The influence of the inner and outer hair cells has been widely researched. The common view is that the numerous outer hair cells respond to low intensity sounds (quiet sounds, below 60 dB). The inner hair cells act as a booster, by responding to high intensity, louder sounds. When the basilar membrane moves, it causes the small hairs on the top of the hair cells (called stereocilia) to bend against the overhanging tectorial membrane. The bending of the hair cells causes chemical actions within the cell itself creating electrical impulses (action potentials) in the nerve fibers attached to the bottom of the hair cells. The nerve impulses travel up the nerve to the temporal lobe of the brain. The intensity of a sound can be identified based on the number of hair cells affected and the number of impulses sent to the brain. Loud sounds cause a large number of hair cells to be moved, and many nerve impulses to be transmitted to the brain.

The separate nerve fibers then join and travel to the lowest portion of the brain, the brainstem. Nerves from the vestibular part (balance part) of the inner ear combine with the cochlear nerves to form the VIII cranial nerve (auditory or vestibulocochlear nerve). Once the nerve impulses enter the brainstem, they follow an established pathway, known as the auditory pathway. The organization within the auditory pathway allows for a large amount of cross-over. "Cross-over" means that the sound information (nerve impulses) from one ear does not travel exclusively to one side of the brain. Some of the nerve impulses cross over to the opposite side of the brain. The impulses travel on both sides (bilaterally) up the auditory pathway until they reach a specific point in the temporal lobe called Heschl's gyrus. Crossovers act like a safety net. If one side of the auditory pathway is blocked or damaged, the impulses can still reach Heschl's gyrus to be interpreted as sound.

See also Neuron.


Resources

books

Mango, Karin. Hearing Loss. New York: Franklin Watts, 1991.

Martin, Frederick. Introduction to Audiology. 6th ed. Boston: Allyn and Bacon, 1997.

Moller, Aage R. Sensory Systems: Anatomy and Physiology. New York: Academic Press, 2002.

Rahn, Joan. Ears, Hearing and Balance. New York: Atheneum, 1984.

Simko, Carole. Wired for Sound. Washington, DC: Kendall Green Publications, 1986.

Sundstrom, Susan. Understanding Hearing Loss and What Can Be Done. Illinois: Interstate Publishers, 1983.

periodicals

Mestel, Rosie. "Pinna to the Fore." Discover 14 (June 1993): 45-54.


Kathryn Glynn

KEY TERMS

Amplify— To increase.

Attenuate— To decrease.

Bilateral— Both sides of an object divided by a line or plane of symmetry.

Binaural— Both ears.

Decibel— A unit of measurement of the intensity of sound, abbreviated dB.

Equilibrium— Balance, the ability to maintain body position.

Frequency— Pitch, the number of vibrations (or sound waves) per second.

Hertz— A unit of measurement for frequency, abbreviated Hz. One hertz is one cycle per second.

Hydraulic— Fluid in motion.

Intensity— Loudness, the amount of energy of a vibration or sound wave.

Localization— Ability to identify where a sound is coming from.

Hearing

views updated May 14 2018

Hearing


Hearing is the sense that enables an organism to detect sound waves. It serves mainly to allow an animal to detect danger, to locate its prey, to communicate, and even to express emotion. As one of the five human senses, the ability to hear is an especially important sense because of its connection to human speech and language.

To understand what hearing is about, it is important to understand the nature of sound. Sound needs a medium, such as air, to be heard. This means that in a vacuum there is no sound because no air is present. This was proven in the seventeenth century when the English chemist Robert Boyle (1627–1691) placed a bell inside an airtight jar and gradually withdrew all the air as he rang the bell. When all the air was gone, the ringing bell made no sound at all. This is because sound is created by waves of pressure or vibrations that happen when something disturbs the air. It is much like the way a pebble thrown into still water creates an expanding ring of ripples. When a noise is made, it disturbs the air and a sound wave begins as the air vibrates from the original noise. Without a medium to travel through, sound does not exist. A drawing of a sound wave would therefore look like a wavy line.

Only vertebrates (animals with a backbone) and some insects have the ability to hear. This means that only these types of animals have special organs to receive and then interpret these vibrations of the air. Just as smell and taste use chemoreceptors (a nerve cell that responds to chemical stimuli) to detect dissolved or airborne chemicals, and touch and sight use tactile (touch) and visual receptors, hearing employs auditory receptors. Each sense has a specialized type of receptor that is geared to respond to a certain type of stimulus. Most vertebrates have a system that enables them to detect sound waves and then to convert them into nerve impulses that the brain identifies.

HEARING IN MAMMALS

Hearing reached its highest level of development in mammals, a class that includes humans. The human ear is a complicated organ that serves as a good model of how animals hear. Humans and most other mammals have an outer, middle, and inner ear.

The Outer Ear. The outer, or external ear, is called the pinna and is the part that is outside of the head. Its shape is designed to catch and direct sound inwards. Mammals have two ears so they can better locate the direction of a sound. The brain actually calculates the location by comparing the times at which the sound reaches each ear, since it will reach the ear closer to the sound first. As the outer ear collects the sound, it funnels it into a passageway called the auditory canal and then on through to the tympanic membrane or eardrum. This delicate, tight piece of tissue is like the tight skin on a drum. When the sound, made up of certain vibrations, reaches the eardrum, it causes it to vibrate.

The Middle Ear. Beyond the eardrum is the middle ear, and here the vibrations are transferred to three bones inside of it called the hammer, the anvil, and the stirrup. The eardrum makes the hammer vibrate. Like a chain reaction, the hammer makes the anvil vibrate. This makes the stirrup do the same, each time increasing the intensity of the sound.

The Inner Ear. The stirrup then vibrates against the inner ear's oval window, which covers a snail-shaped structure called the cochlea. The cochlea is filled with fluid and lined with tiny hair cells. It is here that the vibrations are changed into nerve impulses. Once the mechanical vibrations are converted into electrical impulses, they travel through the auditory nerve to the brain's cerebral cortex. There they are interpreted as sounds. The type of sound sensed by the brain depends on which hair cells are triggered.

The Eustachian Tube. It is important for the pressure on both sides of the eardrum to be equal, so the ear has a mechanism to equalize it. This mechanism, the Eustachian tube, connects the middle ear with the throat. It is not a permanently open tube but works like a valve that opens and closes as necessary. We experience this pressure imbalance when our ears feel uncomfortable on an airplane and our hearing seems to fade. We also experience the "pop" as we yawn or swallow and suddenly equalize the pressure.

The Semicircular Canals. The inner ear also helps people keep their balance as they move about. Fluid inside the semicircular canals, which lie next to the cochlea, moves and shifts and tells the brain about the body's position. Spinning about makes someone dizzy afterwards because this fluid keeps sloshing about and does not stop all at once. The brain thus receives confusing and chaotic signals until the fluid slows and stops sloshing.

HEARING IN INVERTEBRATES

Most invertebrates (animals without a backbone) do not have specific receptors for detecting the air vibrations that produce sound, and most have no hearing. They do feel the vibrations of the air, water, or soil in which they live. Insects are an exception, since crickets, grasshoppers, katydids, cicadas, butterflies, moths, and flies are capable of hearing. Crickets and katydids have receptors called tympanic membranes, which are similar to ears, on their legs, and grasshoppers and cicadas have them on their abdomen. The tympanic membranes of grasshoppers and crickets are known to function much the way the human eardrum does.

ECHOLOCATION

Some mammals can move their outer ears to better focus on a sound. Certain deep-diving marine mammals like seals and dolphins can close off their ear canals when submerged. Bats and dolphins use echolocation as a way of sensing their surroundings. They send out a sound and then listen for the echo as it bounces off an object. Both can determine not only how far they are from the object, but their brains can actually analyze the echo pattern and form an image of it.

HEARING LOSS

Hearing in humans can be impaired or lost altogether. The most common reason for hearing loss is a stiffening and eventual death of the important hair cells in the inner ear. Loud noises make this happen. It is estimated that the average person has lost more than 40 percent of his or her hair cells by the age of sixty-five. Loudness is measured in decibels, and ears can be permanently damaged by two to three hours of exposure to 90 decibels. Since the music at some rock concerts is often as high as 130 decibels, it is not surprising that some rock musicians and their fans have diminished hearing.

Since communication is such an important part of being human, the loss of hearing is an especially isolating disability for many people. Helen Keller (1880–1968), who became blind and deaf during infancy, said that her lack of sight was nothing compared to how her deafness isolated her from others.

Hearing

views updated May 18 2018

Hearing

The ability to perceive sound.

The ear, the receptive organ for hearing, has three major parts: the outer, middle, and inner ear. The pinna, or outer ear (the part of the ear attached to the head), funnels sound waves through the outer ear. The sound waves pass down the auditory canal to the middle ear, where they strike the tympanic membrane, or eardrum, causing it to vibrate. These vibrations are picked up by three small bones (ossicles) in the middle ear named for their shapes: the malleus (hammer), incus (anvil), and stapes (stirrup). The stirrup is attached to a thin membrane called the oval window, which is much smaller than the eardrum and consequently receives more pressure.

As the oval window vibrates from the increased pressure, the fluid in the coiled, tubular cochlea (inner ear) begins to vibrate the membrane of the cochlea (basilar membrane) which, in turn, bends fine, hairlike cells on its surface. These auditory receptors generate miniature electrical forces which trigger nerve impulses that then travel via the auditory nerve, first to the thalamus and then to the primary auditory cortex in the temporal lobe of the brain. Here, transformed into auditory but meaningless sensations, the impulses are relayed to association areas of the brain which convert them into meaningful sounds by examining the activity patterns of the neurons, or nerve cells, to determine sound frequencies. Although the ear changes sound waves into neural impulses, it is the brain that actually "hears," or perceives the sound as meaningful.

The auditory system contains about 25,000 cochlear neurons that can process a wide range of sounds. The sounds we hear are determined by two characteristics of sound waves: their amplitude (the difference in air pressure between the peak and baseline of a wave) and their frequency (the number of waves that pass by a given point every second). Loudness of sound is influenced by a complex relationship between the wavelength and amplitude of the wave; the greater the amplitude, the faster the neurons fire impulses to the brain, and the louder the sound that is heard. Loudness of sound is usually expressed in decibels (dB). A whisper is about 30 dB, normal conversation is about 60 dB, and a subway train is about 90 dB. Sounds above 120 dB are generally painful to the human ear.

DECIBEL RATINGS AND HAZARDOUS LEVELS OF NOISE

Above 110 decibels, hearing may become painful; above 120 decibels is considered deafening. Above 135 decibels, hearing will become extremely painful and hearing loss may result if exposure is prolonged. Above 180 decibels, hearing loss is almost certain with any exposure.

Decibel level    Example of sounds
30               Soft whisper
35               Noise may prevent the listener from falling asleep
40               Quiet office noise level
50               Quiet conversation
60               Average television, sewing machine, lively conversation
70               Busy traffic, noisy restaurant
80               Heavy city traffic, factory noise, alarm clock
90               Cocktail party, lawn mower
100              Pneumatic drill
120              Sandblasting, thunder
140              Jet airplane
180              Rocket launching pad

The loudest rock band on record was measured at 160 dB.

Pitch (how high or low a tone sounds) is a function of frequency. Sounds with high frequencies are heard as having a high pitch; those with low frequencies are heard as low-pitched. The normal frequency range of human hearing is 20 to 20,000 Hz. Frequencies of some commonly heard sounds include the human voice (120 to approximately 1,100 Hz), middle C on the piano (256 Hz), and the highest note on the piano (4,100 Hz). Differences in frequency are discerned, or coded, by the human ear in two ways: frequency matching and place. The lowest sound frequencies are coded by frequency matching, duplicating the frequency with the firing rate of auditory nerve fibers. Frequencies in the low to moderate range are coded both by frequency matching and by the place on the basilar membrane where the sound wave peaks. High frequencies are coded solely by the placement of the wave peak.

Loss of hearing can result from conductive or sensorineural deafness or damage to auditory areas of the brain. In conductive hearing loss, the sound waves are unable to reach the inner ear due to disease or obstruction of the auditory conductive system (the external auditory canal; the eardrum, or tympanic membrane; or structures and spaces in the middle ear). Sensorineural hearing loss refers to two different but related types of impairment, both affecting the inner ear. Sensory hearing loss involves damage, degeneration, or developmental failure of the hair cells in the cochlea's organ of Corti, while neural loss involves the auditory nerve or other parts of the cochlea. Sensorineural hearing loss occurs as a result of disease, birth defects, aging, or continual exposure to loud sounds. Damage to the auditory areas of the brain through severe head injury, tumors, or strokes can also prevent either the perception or the interpretation of sound.

Further Reading

Davis, Lennard J. Enforcing Normalcy: Disability, Deafness, and the Body. New York: Verso, 1995.

Hearing

views updated May 18 2018

Hearing

Hearing is the process by which humans, using ears, detect and perceive sounds. Sounds are pressure waves transmitted through some medium, usually air or water. Sound waves are characterized by frequency (measured in cycles per second, cps, or hertz, Hz) and amplitude, the size of the waves. Low-frequency waves produce low-pitched sounds (such as the rumbling sounds of distant thunder) and high-frequency waves produce high-pitched sounds (such as a mouse squeak). Sounds audible to most humans range from as low as 20 Hz to as high as 20,000 Hz in a young child (the upper range especially decreases with age). Loudness is measured in decibels (dB), a measure of the energy content or power of the waves proportional to amplitude. The decibel scale begins at 0 for the lowest audible sound, and increases logarithmically, meaning that a sound of 80 dB is not just twice as loud as a sound of 40 dB, but has 10,000 times more power! Sounds of 100 dB are so intense that they can severely damage the inner ear, as many jack-hammer operators and rock stars have discovered.
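The 10,000-fold figure is easy to verify (a worked step added for clarity): every 10 dB corresponds to a tenfold increase in power, so

\[ \frac{P_{80\ \text{dB}}}{P_{40\ \text{dB}}} = 10^{(80-40)/10} = 10^{4} = 10{,}000. \]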

The ear is a complex sensory organ, divided into three parts: external (outer) ear, middle ear, and inner ear. The outer and middle ear help to protect and maintain optimal conditions for the hearing process and to direct the sound stimuli to the actual sensory receptors, hair cells, located in the cochlea of the inner ear.

Outer Ear and Middle Ear

The most visible part of the ear is the pinna, one of two external ear structures. Its elastic cartilage framework provides flexible protection while collecting sound waves from the air (much like a funnel or satellite dish); the intricate pattern of folds helps prevent the occasional flying insect or other particulate matter from entering the ear canal, the other external ear component. The ear (auditory) canal directs the sound to the delicate eardrum (tympanic membrane), the boundary between external and middle ear. The ear canal has many small hairs and is lined by cells that secrete ear wax (cerumen), another defense to keep the canal free of material that might block the sound or damage the delicate tympanic membrane.

The middle ear contains small bones (auditory ossicles) that transmit sound waves from the eardrum to inner ear. When the sound causes the eardrum to vibrate, the malleus (hammer) on the inside of the eardrum moves accordingly, pushing on the incus (anvil), which sends the movements to the stapes (stirrup), which in turn pushes on fluid in the inner ear, through an opening in the cochlea called the oval window. Small muscles attached to these ossicles prevent their excessive vibration and protect the cochlea from damage when a loud sound is detected (or anticipated). Another important middle ear structure is the auditory (eustachian) tube, which connects the middle ear to the pharynx (throat). For hearing to work properly, the pressure on both sides of the eardrum must be equal; otherwise, the tight drum would not vibrate. Therefore, the middle ear must be connected to the outside.

Sometimes, when there are sudden changes in air pressure, the pressure difference impairs hearing and causes pain. In babies and many young people, fluid often builds up in the middle ear and pushes on the eardrum. The stagnant fluids can also promote a bacterial infection of the middle ear, called otitis media (OM). OM also occurs when upper respiratory infections (colds and sore throats) travel to the middle ear by way of the auditory tube. Sometimes the pressure can be relieved only by inserting drainage tubes in the eardrum.

Inner Ear

The inner ear contains the vestibule, for the sense of balance and equilibrium, and the cochlea, which converts the sound pressure waves to electrical impulses that are sent to the brain. The cochlea is divided into three chambers, or ducts. The cochlear duct contains the hair cells that detect sound. It is sandwiched between the tympanic and vestibular ducts, which are interconnected at the tip. These ducts form a spiral, giving the cochlea a snail shell appearance. Inside the cochlear duct, the hair cells are anchored on the basilar membrane, which forms the roof of the vestibular duct. The tips of the hair cells are in contact with the tectorial membrane, which forms a sort of awning. When the stapes pushes on the fluid of the inner ear, it creates pressure waves in the fluid of the tympanic and vestibular ducts (like kicking the side of a wading pool). These waves push the basilar membrane up and down, which then pushes the hair cells against the tectorial membrane, bending the "hairs" (stereocilia). When stereocilia are bent, the hair cell is excited, creating impulses that are transmitted to the brain.

How does the cochlea differentiate between sounds of different pitches and intensities? Pitch discrimination results from the fact that the basilar membrane has different vibrational properties along its length: the base (nearest the oval window) vibrates most strongly to high-frequency sounds, and the tip (apex) to low frequencies. The hair cells along the length of the cochlea each make their own connection to the brain, much as each key on an electric piano is wired for a particular note. Loud (high-amplitude) sounds cause the basilar membrane to vibrate more vigorously than soft (low-amplitude) sounds, so the brain distinguishes loud from soft by differences in the intensity of nerve signaling from the cochlea.
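
This place-to-frequency map is commonly approximated by Greenwood's function; the constants below are his widely cited human fits, used here purely as an illustrative assumption rather than a formula given in this entry.

def greenwood_place_to_freq(x: float) -> float:
    """Greenwood's place-frequency approximation for the human cochlea.

    x is the fractional distance along the basilar membrane measured
    from the apex (tip): 0.0 = apex, 1.0 = base (nearest the oval window).
    """
    A, a, k = 165.4, 2.1, 0.88  # published human constants (assumed)
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:.2f} from apex -> ~{greenwood_place_to_freq(x):,.0f} Hz")
# Position 0.00 (the tip) maps to ~20 Hz; position 1.00 (the base) to ~20,700 Hz,
# matching the low-at-the-apex, high-at-the-base arrangement described above.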

Hair cells themselves do not generate the impulses that travel to the central nervous system (CNS); they stimulate the nerve fibers to which they are connected. These fibers form the cochlear branch of the eighth cranial (vestibulocochlear) nerve. In the CNS, the information is transmitted both to the brainstem, which controls reflex activity, and to the auditory cortex, where perception and interpretation of the sound occur. By comparing inputs from the two ears, the brain can use the relative timing of a sound at the right and left ears to determine the location of its source. This is called binaural hearing.
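
A simplified geometric model shows how small these timing differences are. The sketch assumes a 20 cm ear separation and ignores diffraction of sound around the head; both are simplifying assumptions.

import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C
EAR_SEPARATION = 0.20   # m; assumed typical adult value

def interaural_time_difference(azimuth_deg: float) -> float:
    """Extra travel time (seconds) to the far ear for a distant source.

    Straight-path model: path difference = separation * sin(azimuth),
    where azimuth 0 is straight ahead and 90 is directly to one side.
    """
    path_difference = EAR_SEPARATION * math.sin(math.radians(azimuth_deg))
    return path_difference / SPEED_OF_SOUND

for angle in (0, 30, 60, 90):
    itd_us = interaural_time_difference(angle) * 1e6
    print(f"source at {angle:2d} degrees -> ITD ~ {itd_us:3.0f} microseconds")
# A source directly to one side arrives ~580 microseconds earlier at the
# near ear; the brainstem resolves differences far smaller than this.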

see also Brain; Neuron

Harold J. Grau

Hearing

HEARING

A legal proceeding in which an issue of law or fact is tried and evidence is presented to help determine the issue.

Hearings resemble trials in that they ordinarily are held publicly and involve opposing parties. They differ from trials in that they feature more relaxed standards of evidence and procedure and take place in a variety of settings before a broader range of authorities (judges, examiners, and lawmakers). Hearings fall into three broad categories: judicial, administrative, and legislative. Judicial hearings are tailored to the issue at hand and to the stage the legal proceeding has reached. Administrative hearings cover matters of rule making and the adjudication of individual cases. Legislative hearings occur at both the federal and state levels and are generally conducted to find facts and survey public opinion. They encompass a wide range of issues relevant to law, government, society, and public policy.

Judicial hearings take place prior to a trial in both civil and criminal cases. Ex parte hearings provide a forum for only one side of a dispute, as in the case of a temporary restraining order, whereas adversary hearings involve both parties. Preliminary hearings, also called preliminary examinations, are conducted when a person has been charged with a crime. Held before a magistrate or judge, a preliminary hearing is used to determine whether the evidence is sufficient to justify detaining the accused or releasing the accused on bail. Closely related are detention hearings, which determine whether a juvenile should be held in custody. Suppression hearings take place before trial at the request of an attorney seeking to have illegally obtained or irrelevant evidence excluded from the trial.

Administrative hearings are conducted by state and federal agencies. Rule-making hearings evaluate and determine appropriate regulations, and adjudicatory hearings try matters of fact in individual cases. The former are commonly used to garner opinion on matters that affect the public, as, for example, when the Environmental Protection Agency (EPA) considers changing its rules. The latter commonly take place when an individual is charged with violating rules that come under the agency's jurisdiction, for example by violating a pollution regulation of the EPA or, if incarcerated, by violating behavior standards set for prisoners by the Department of Corrections.

Some blurring of this distinction occurs, which is important given the generally more relaxed standards that apply to some administrative hearings. The degree of formality required of an administrative hearing is determined by the liberty interest at stake: the greater that interest, the more formal the hearing. Notably, rules limiting the admissibility of evidence are looser in administrative hearings than in trials. Adjudicatory hearings can admit, for example, hearsay that generally would not be permitted at trial. (Hearsay is a statement made by a person who does not appear in person, offered in evidence by a third party who does appear.) The Administrative Procedure Act (APA) (5 U.S.C.A. § 551 et seq.) governs administrative hearings by federal agencies, and state laws largely modeled upon the APA govern state agencies. These hearings are conducted by a civil servant called a hearing examiner at the state level and an administrative law judge at the federal level.

Legislative hearings occur in state legislatures and in the U.S. Congress, and are a function of legislative committees. They are commonly public events, held whenever a lawmaking body is contemplating a change in law, during which advocates and opponents air their views. Because of their controversial nature, they often are covered extensively by the media.

Not all legislative hearings consider changes in legislation; some examine allegations of wrongdoing. Although lawmaking bodies do not have a judicial function, they retain the power to discipline their members, a key function of state and federal ethics committees. Fact finding is the ostensible purpose of such investigative hearings; critics often argue, however, that they are staged to attack political opponents. During the twentieth century, legislative hearings were used to investigate such matters as alleged Communist infiltration of government and industry (the House Un-American Activities Committee hearings) and abuses of power by the executive branch (the Watergate and Whitewater hearings).

cross-references

Administrative Law and Procedure.