Auditory System Structure and Processing (6A)


MCAT Psychological and Social Foundations › Auditory System Structure and Processing (6A)

Questions 1 - 10
1

Researchers tested sound localization while participants wore an insert earplug in the right ear that attenuated incoming sound by ~15 dB without distorting frequency content. When brief clicks were presented from speakers positioned in the horizontal plane, participants systematically reported the clicks as originating more toward the left than the true source, especially for sources near the midline. Based on the study setup, which conclusion about auditory processing is most consistent?

A. The bias reflects reliance on retinal disparity cues, with reduced right-ear input mimicking a leftward visual parallax signal

B. The bias reflects reliance on interaural level differences, with the attenuated right-ear input shifting perceived location toward the left

C. The bias is most consistent with enhanced interaural time differences at the right ear, making sounds appear leftward

D. The bias occurs because the cochlea in the left ear amplifies low frequencies after unilateral attenuation, pulling perceived location leftward

Explanation

This question tests understanding of binaural cues for sound localization, specifically interaural level differences (ILDs). The auditory system uses ILDs to determine sound source location by comparing the intensity of sounds reaching each ear: sounds appear to come from the side with greater intensity. When the right ear is attenuated by 15 dB with an earplug, sounds reaching that ear are quieter than those reaching the left ear, creating an artificial ILD that biases perception leftward. The correct answer (B) accurately identifies this as reliance on interaural level differences causing the leftward shift. Answer choice A incorrectly invokes retinal disparity, which is a visual depth cue unrelated to auditory localization. To avoid confusion between sensory systems, remember that auditory localization uses interaural time differences (ITDs) and interaural level differences (ILDs), while visual depth perception uses retinal disparity and convergence. When one ear receives attenuated input, the brain interprets this as the sound source being on the opposite side.
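The ILD logic can be sketched as a toy numeric model. The `perceived_side` helper and the 60 dB baseline level are illustrative assumptions; only the 15 dB attenuation comes from the question stem:

```python
# Toy illustration of interaural level differences (ILDs).
# The brain lateralizes a sound toward the ear receiving the
# more intense signal; an earplug creates an artificial ILD.

def perceived_side(left_db: float, right_db: float) -> str:
    """Return the side a listener lateralizes toward, based only
    on the level difference between the two ears."""
    ild = right_db - left_db  # positive -> right ear louder
    if ild > 0:
        return "right"
    if ild < 0:
        return "left"
    return "center"

# A midline click normally reaches both ears at the same level
# (assumed 60 dB SPL here), so it is heard at the midline.
print(perceived_side(60.0, 60.0))         # center

# With a 15 dB plug in the right ear, the same click arrives
# quieter on that side, biasing perception leftward.
print(perceived_side(60.0, 60.0 - 15.0))  # left
```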

2

In a dichotic listening experiment, two different syllables are presented simultaneously, one to each ear. When participants are instructed to shadow (repeat) the syllable presented to the right ear, they are accurate; when instructed to shadow the left ear, accuracy drops, even though overall hearing thresholds are normal. Based on the task demands, which conclusion about auditory processing is most consistent?

A. The pattern is most consistent with contralateral auditory projections and left-hemisphere language specialization favoring right-ear input for speech

B. The pattern is most consistent with the right ear having a denser population of rods and cones, improving phoneme detection

C. The pattern indicates that the left ear projects ipsilaterally to the left hemisphere, creating interference during right-ear shadowing

D. The pattern is best explained by increased interaural time differences in the left ear, which disrupt temporal integration of syllables

Explanation

This question tests understanding of hemispheric specialization and contralateral auditory projections. The auditory system has stronger contralateral than ipsilateral projections, meaning sounds from the right ear project more strongly to the left hemisphere. Since language processing is typically left-hemisphere dominant, the right ear has an advantage for speech perception due to this more direct pathway. The correct answer (A) accurately describes contralateral projections and left-hemisphere language specialization favoring right-ear input. Answer choice B incorrectly mentions rods and cones, which are photoreceptors in the retina for vision, not components of the auditory system. To avoid this error, remember that rods and cones are exclusively visual structures, while the auditory system uses hair cells for transduction. The right-ear advantage for speech reflects neural pathway organization, not sensory receptor differences. When you see lateralized performance differences in dichotic listening, consider hemispheric specialization and projection pathways.

3

Participants sat in a quiet chamber while a tone at 1000 Hz played continuously for 3 minutes at a comfortable level. Immediately afterward, a brief 1000 Hz probe tone was presented at the same physical intensity. Participants rated the probe as softer than an identical probe presented without prior continuous exposure, despite no change in stimulus parameters. Which statement best explains the auditory phenomenon described?

A. The effect is due to photoreceptor bleaching, which reduces sensitivity to the “brightness” of the 1000 Hz probe

B. The effect is best explained by increased ossicle stiffness over minutes, selectively amplifying 1000 Hz vibrations

C. Auditory neurons responsive to the adapted frequency show reduced responsiveness, decreasing perceived loudness of the subsequent probe

D. The probe is perceived as louder because adaptation increases gain to maintain constant perceived intensity

Explanation

This question tests understanding of auditory adaptation, a phenomenon where prolonged exposure to a stimulus reduces neural responsiveness. When auditory neurons are continuously stimulated at 1000 Hz for 3 minutes, they become less responsive to that specific frequency, leading to decreased perceived loudness when the same frequency is presented again. This frequency-specific adaptation explains why the probe tone sounds softer despite having identical physical properties. The correct answer (C) accurately describes reduced responsiveness in frequency-specific neurons leading to decreased perceived loudness. Answer choice A incorrectly invokes photoreceptor bleaching, which is a visual phenomenon involving light-sensitive proteins in the retina, not an auditory process. To avoid cross-modal confusion, remember that adaptation occurs within each sensory system using system-specific mechanisms: photoreceptor bleaching for vision, neural fatigue for audition. When you see prolonged exposure followed by altered perception at the same frequency, think auditory adaptation.
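A minimal sketch of this adaptation, assuming a simple exponential decay of firing rate toward a lower steady state (the time constant and floor are hypothetical values chosen only to illustrate the trend, not measured physiology):

```python
import math

# Toy model of auditory neural adaptation: under sustained
# stimulation, firing rate decays exponentially from its onset
# value toward a lower steady state.

def adapted_rate(initial_rate: float, t_seconds: float,
                 tau: float = 30.0, floor: float = 0.4) -> float:
    """Firing rate after t seconds of continuous stimulation,
    decaying from initial_rate toward floor * initial_rate.
    tau and floor are hypothetical illustration parameters."""
    steady = floor * initial_rate
    return steady + (initial_rate - steady) * math.exp(-t_seconds / tau)

fresh = adapted_rate(100.0, 0.0)         # no prior exposure
after_3min = adapted_rate(100.0, 180.0)  # after 3 min of the 1000 Hz tone
print(f"fresh: {fresh:.1f} spikes/s, adapted: {after_3min:.1f} spikes/s")
```

Because the adapted neurons respond less vigorously, an identical probe drives a weaker population response and is heard as softer.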

4

A lab investigates an auditory illusion using two alternating tones presented over headphones: Tone 1 is 500 Hz to the left ear and 1500 Hz to the right ear; Tone 2 swaps the frequencies across ears at a rate of 2 swaps per second. Many participants report hearing a single tone that “jumps” between ears rather than two tones swapping pitch. Which statement best explains the auditory phenomenon described?

A. The illusion is best explained by binocular rivalry, in which competing visual inputs alternate dominance across eyes

B. The illusion occurs because the semicircular canals encode frequency changes and misattribute them to lateral position

C. The illusion reflects auditory grouping that prioritizes spatial continuity, leading perceived location to dominate over veridical pitch-ear pairing

D. The illusion indicates that pitch is computed exclusively in the middle ear, so swapping input ears forces location to be inferred incorrectly

Explanation

This question tests understanding of auditory scene analysis and perceptual grouping principles. The auditory system uses various cues to group sounds into coherent streams, and spatial continuity is a powerful grouping principle that can override frequency information. In this illusion, the brain prioritizes maintaining a spatially coherent percept (sound staying in one location) over accurately tracking which frequency is in which ear, resulting in the perception of a single jumping tone rather than two swapping tones. The correct answer (C) explains that auditory grouping prioritizes spatial continuity over veridical pitch-ear pairing. Answer choice B incorrectly attributes the phenomenon to semicircular canals, which are vestibular organs that detect head rotation, not auditory frequency. To avoid confusing auditory and vestibular systems, remember that the cochlea processes sound while semicircular canals process rotational movement. When analyzing auditory illusions, consider how grouping principles like spatial continuity can override other perceptual features.

5

In a psychoacoustics study, participants listened to a 2-second complex tone containing equal-energy components at 250 Hz and 4000 Hz. Over repeated trials, the 4000 Hz component was gradually reduced in intensity until the listener reported hearing “only a low hum,” despite the 250 Hz component remaining unchanged. This shift occurred more quickly when the tone was presented at moderate overall intensity than at near-threshold intensity. Which statement best explains the auditory phenomenon described?

A. Repeated exposure strengthened top-down expectations, increasing perceived intensity of the 4000 Hz component despite attenuation

B. Neural adaptation in frequency-specific auditory pathways decreased sensitivity to the higher-frequency component with repeated stimulation

C. Reduced middle-ear ossicle movement selectively attenuated high frequencies, making the 4000 Hz component mechanically disappear over trials

D. Opponent-process coding in the retina reduced perceived “brightness” of the 4000 Hz component, biasing reports toward low pitch

Explanation

This question tests understanding of frequency-specific neural adaptation in the auditory system. Neural adaptation occurs when auditory neurons become less responsive to sustained or repeated stimulation at specific frequencies, leading to decreased sensitivity over time. In this scenario, the 4000 Hz component gradually becomes less perceptible due to adaptation in high-frequency-tuned neurons, while the 250 Hz component remains audible because its corresponding neurons are not adapting. The correct answer (B) explains that repeated stimulation causes frequency-specific pathways to adapt, reducing sensitivity to the higher frequency component. Answer choice C incorrectly suggests mechanical attenuation by the ossicles, which would affect all frequencies and wouldn't explain the gradual change over trials. To avoid this type of error, remember that neural adaptation is frequency-specific and occurs centrally, while mechanical changes in the middle ear affect broad frequency ranges. When you see gradual perceptual changes with repeated stimulation at specific frequencies, think neural adaptation rather than mechanical factors.

6

A researcher presents brief tones at 200 Hz and 6000 Hz at equal sound pressure levels and asks participants to rate perceived pitch and clarity. Participants reliably distinguish both pitches, but report the 6000 Hz tone as “thin” and more easily masked by a low-level background noise. The researcher notes that participants with a history of noise exposure show a larger effect. Which statement best explains the auditory phenomenon described?

A. High-frequency tones are encoded by the cochlear apex, which is shielded from noise exposure, so clarity should improve with exposure history

B. The effect is best explained by decreased pupil diameter during high-frequency listening, which reduces auditory input gain

C. High-frequency perception depends on cochlear regions that are more vulnerable to noise-related damage, reducing effective encoding and increasing susceptibility to masking

D. The effect occurs because background noise increases the speed of sound, shifting 6000 Hz into the infrasonic range

Explanation

This question tests understanding of frequency-dependent vulnerability in the cochlea and masking effects. High-frequency regions of the cochlea (the base) are more susceptible to noise-induced damage than low-frequency regions, making high-frequency perception more vulnerable to degradation. In addition, masking spreads upward in frequency, so even low-level background noise readily obscures a high-frequency tone whose neural encoding is already degraded, explaining why the 6000 Hz tone seems "thin" and easily obscured. The correct answer (C) explains that high-frequency cochlear regions are more vulnerable to damage, reducing encoding effectiveness and increasing masking susceptibility. Answer choice A contains the anatomical error that high frequencies are encoded at the apex - they're actually encoded at the base. To remember cochlear anatomy, use the mnemonic "high at the base, low at the apex" - opposite to what might seem intuitive. When evaluating frequency-specific vulnerabilities, consider both the anatomical location in the cochlea and the inherent masking properties of different frequencies.

7

In a clinical screening, a patient reports that speech sounds “muffled,” but pure tones at 250–500 Hz are detected at near-normal levels. Thresholds for 4000–8000 Hz are markedly elevated, and the patient especially struggles to detect brief consonant-like bursts. Tympanometry is normal. Which outcome would be expected if the cochlear region most responsible for encoding high frequencies is damaged?

A. Improved detection of high-frequency bursts due to compensatory recruitment of apical hair cells tuned to low frequencies

B. Selective elevation of high-frequency thresholds consistent with reduced transduction where the basilar membrane responds maximally to high frequencies

C. Uniform threshold elevation across all frequencies due to impaired lens accommodation limiting auditory focus

D. Selective loss of low-frequency perception because high-frequency regions are located at the cochlear apex

Explanation

This question tests understanding of tonotopic organization in the cochlea and frequency-specific hearing loss. The cochlea is organized tonotopically, with high frequencies processed at the base (near the oval window) and low frequencies at the apex - this is opposite to what some students assume. When the basal region is damaged, high-frequency hearing is selectively impaired while low-frequency hearing remains intact, explaining why speech sounds muffled (loss of high-frequency consonants) while low pure tones are detected normally. The correct answer (B) accurately describes selective elevation of high-frequency thresholds due to damage where the basilar membrane responds maximally to high frequencies. Answer choice D contains the common misconception that high frequencies are processed at the apex, when they're actually processed at the base. To remember cochlear organization, think "base = high" and "apex = low" - the base handles the high-pitched sounds. When you see frequency-specific hearing loss, map it to the corresponding cochlear region.
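The tonotopic map can be illustrated with Greenwood's position-frequency function for the human cochlea (constants A = 165.4, a = 2.1, k = 0.88). This sketch shows that the patient's impaired 4000-8000 Hz range maps toward the base, while the preserved 250-500 Hz range maps near the apex:

```python
import math

# Greenwood's function for the human cochlea maps fractional
# distance along the basilar membrane to characteristic
# frequency: F = 165.4 * (10 ** (2.1 * x) - 0.88),
# where x = 0 at the apex and x = 1 at the base.

def greenwood_hz(x: float) -> float:
    """Characteristic frequency at fractional distance x from the apex."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

def position_of(freq_hz: float) -> float:
    """Inverse map: fractional distance from the apex for a frequency."""
    return math.log10(freq_hz / 165.4 + 0.88) / 2.1

# Low frequencies sit near the apex, high frequencies near the base:
print(f"250 Hz  -> x = {position_of(250.0):.2f} (near apex, spared)")
print(f"4000 Hz -> x = {position_of(4000.0):.2f} (toward base, damaged)")
print(f"8000 Hz -> x = {position_of(8000.0):.2f} (near base, damaged)")
```

Basal damage therefore elevates thresholds only for the frequencies whose characteristic place lies in the damaged region, matching the selective high-frequency loss in the vignette.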

8

In a study of cochlear frequency selectivity, investigators presented a 3000 Hz tone and measured detection thresholds while adding narrowband noise centered either at 3000 Hz or at 800 Hz. Detection thresholds increased markedly with noise centered at 3000 Hz but only slightly with noise at 800 Hz. Which statement best explains the auditory phenomenon described?

A. Masking is strongest when noise activates the same cochlear frequency channel as the target tone

B. Masking reflects visual crowding, so noise near 3000 Hz interferes more because it is closer to the fovea

C. Noise at 800 Hz increases threshold because it reverses ossicle motion, preventing the stapes from moving at 3000 Hz

D. Masking is strongest when noise activates a distant cochlear region because the brain averages across the entire basilar membrane

Explanation

This question examines cochlear frequency selectivity and masking. Masking is strongest when the masker and the target activate overlapping cochlear frequency channels, elevating detection thresholds through overlapping patterns of excitation on the basilar membrane. The much larger threshold increase with noise centered at 3000 Hz indicates strong masking within the matched channel, while the 800 Hz noise falls largely outside the target's frequency channel and masks only weakly. The correct answer (A) accurately describes this channel-specific masking. Answer choice D errs by suggesting that distant cochlear regions mask more effectively because the brain averages across the basilar membrane, which ignores tonotopic selectivity. To sidestep this error, remember that masking peaks when masker and target overlap in frequency; checking whether the size of the effect tracks frequency proximity confirms frequency selectivity.
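A rough sketch of channel overlap, using the Glasberg and Moore ERB (equivalent rectangular bandwidth) approximation of auditory-filter width. Treating the filter as symmetric is a simplification (real masking spreads upward in frequency), but it captures why 3000 Hz noise masks a 3000 Hz tone while 800 Hz noise barely does:

```python
# ERB approximation of auditory-filter bandwidth
# (Glasberg & Moore): ERB(f) = 24.7 * (4.37 * f / 1000 + 1).

def erb_hz(center_hz: float) -> float:
    """Approximate auditory-filter bandwidth at a center frequency."""
    return 24.7 * (4.37 * center_hz / 1000.0 + 1.0)

def masks_target(noise_hz: float, target_hz: float) -> bool:
    """Crude symmetric test: does the masker's center frequency fall
    inside the target's auditory filter?"""
    return abs(noise_hz - target_hz) <= erb_hz(target_hz) / 2.0

target = 3000.0
print(f"filter width at 3000 Hz: {erb_hz(target):.0f} Hz")
print(masks_target(3000.0, target))  # True: same channel, strong masking
print(masks_target(800.0, target))   # False: distant channel, weak masking
```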

9

To investigate an auditory illusion in speech perception, researchers presented the same ambiguous consonant sound embedded in two different vowel contexts. Participants reliably reported hearing different consonants depending on the surrounding vowels, despite identical acoustic input for the consonant segment. The team argued that perception reflected context-driven interpretation rather than peripheral encoding differences. Which statement best explains the auditory phenomenon described?

A. The illusion requires vestibular input to resolve whether the consonant occurred before or after the vowel

B. Color constancy mechanisms in the visual system recalibrate auditory categories when vowels change

C. Top-down phonemic restoration and contextual expectations can bias categorization of an acoustically ambiguous segment

D. The basilar membrane changes its tonotopic map based on adjacent vowels, physically shifting the consonant’s frequency components

Explanation

This question explores auditory illusions in speech perception and contextual effects. Phonemic restoration involves top-down processes that fill in or bias ambiguous sounds based on context, producing different percepts despite identical acoustics. Here, varying vowel contexts alter categorization of the same consonant, reflecting expectation-driven interpretation rather than peripheral encoding differences. The correct answer (C) attributes the effect to phonemic restoration and contextual expectations biasing categorization. Answer choice D mistakenly implies that the basilar membrane physically shifts its tonotopic map, confusing central processing with peripheral mechanics. To sidestep this error, recognize that an illusion must be central when the acoustic input is identical across conditions; if context changes perception without any acoustic variation, the influence is top-down.

10

In a sound localization study, participants localized brief broadband noise bursts while turning their heads slowly. When head movement was allowed, front–back confusions decreased compared with trials where participants kept their heads still. The speaker positions were otherwise identical. Which statement best explains the auditory phenomenon described?

A. Head movement increases interaural time differences for all sources equally, eliminating the need for spectral cues

B. Front–back confusions are resolved by vergence eye movements, which provide depth cues to the auditory cortex

C. Dynamic changes in binaural and spectral cues during head movement help disambiguate front–back location

D. Turning the head mechanically amplifies the cochlea, increasing loudness and thereby improving localization accuracy

Explanation

This question assesses sound localization and dynamic cues. Head movements generate changing binaural and spectral cues that help disambiguate positions, such as front versus back, that produce similar static cues. Allowing movement reduces confusions because motion-induced cue variations add information that static listening lacks. The correct answer (C) explains that these dynamic cues resolve front-back ambiguity. Answer choice A wrongly states that head movement increases interaural time differences for all sources equally, which would add no disambiguating information. For similar questions, ask whether motion adds information; if errors increase under static conditions, dynamic cues are doing the work.
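The front-back ambiguity in static ITDs, and how a head turn breaks it, can be sketched with the simple spherical-head formula ITD = (d / c) * sin(azimuth); the head width d = 0.21 m is an assumed round number:

```python
import math

# Front-back ambiguity in interaural time differences (ITDs):
# a source in front and its mirror position behind the
# interaural axis produce the same static ITD.

def itd_seconds(azimuth_deg: float, d: float = 0.21, c: float = 343.0) -> float:
    """ITD for a source at the given azimuth (0 deg = straight ahead),
    from the simple spherical-head formula (d / c) * sin(azimuth)."""
    return (d / c) * math.sin(math.radians(azimuth_deg))

# 30 deg in front and 150 deg behind are mirror positions,
# so their static ITDs are indistinguishable:
print(math.isclose(itd_seconds(30.0), itd_seconds(150.0)))  # True

# A 10 deg head turn shifts both azimuths the same way relative
# to the ears (to 20 deg and 140 deg), and the ITDs now separate -
# the dynamic cue that resolves the confusion:
print(math.isclose(itd_seconds(20.0), itd_seconds(140.0)))  # False
```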
