Wednesday, October 2, 2019

Sound

Basis of Processing Sound Strategies

Introduction to Coding Strategies: D.J. Allum

Coding strategies define the way in which acoustic sounds in our world are transformed into electrical signals that the brain can understand. The normal-hearing person already has a way to code acoustic sounds when the inner ear (cochlea) is functioning. The cochlea is the sensory organ that transforms acoustic signals into electrical signals. A deaf person, however, does not have a functioning cochlea, and the cochlear implant takes over its function. Technically, it is relatively easy to send electrical current through implanted electrodes. The harder part is making those electrical signals carry the appropriate information about speech and other sounds. That is the job of the coding strategy: the more efficient the coding strategy, the better the chance that the brain will interpret the information as meaningful. Without meaning, sound is only unwanted noise.

Some basic vocabulary is useful in understanding coding strategies:

Frequency. Speech is composed of a range of frequencies, from high-frequency sounds (sss, piii) to low-frequency sounds (ah). These frequencies also occur in environmental sounds. The speech-frequency range runs from about 250 to 6,000 hertz (Hz).

Amplitude. The amplitude, or intensity, of a sound determines how loud it is heard. The usual range from the softest to the loudest speech sound is about 30 dB, while the normal range of human hearing is around 120 dB.

Tonotopic. A special characteristic of the cochlea and the auditory nerve: the apical region of the cochlea (and the nerve near this region) is most sensitive to low frequencies, while the basal region is most sensitive to high frequencies. Moving from the most basal to the most apical region, sensitivity progresses from high to low frequency.

Filters. Filters are used to divide acoustic signals electronically into different frequency ranges. For instance, a speech-frequency range of 4,000 Hz could be divided among 10 filters, so that each filter covers 400 Hz.

Stimulation rate. The number of times per second an electrode is turned on and off, i.e., activated with electrical stimulation.

The normal cochlea works like a series of filters. High-frequency sounds fall into filters at the basal end of the cochlea and low-frequency sounds into filters at the apical end, i.e., in a tonotopic arrangement. Since a deaf person's cochlea cannot accomplish this, the cochlear implant takes its place. It is important to remember that the auditory nerve remains tonotopic even when the cochlea cannot transmit information because of deafness.
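The filter-bank idea above can be sketched in a few lines of Python. This is an illustrative sketch only, not an actual implant algorithm: the function names and the equal-width band layout are assumptions made for demonstration (real coding strategies typically use non-uniform, roughly logarithmic bands).

```python
def make_filter_bands(low_hz, high_hz, n_filters):
    """Divide a frequency range into equal-width (low, high) bands."""
    width = (high_hz - low_hz) / n_filters
    return [(low_hz + i * width, low_hz + (i + 1) * width)
            for i in range(n_filters)]

def channel_for_frequency(freq_hz, bands):
    """Return the index of the band containing freq_hz, or None.

    Mirroring the tonotopic layout, index 0 (the lowest band) would map to
    the most apical electrode and the highest index to the most basal one.
    """
    for i, (lo, hi) in enumerate(bands):
        if lo <= freq_hz < hi:
            return i
    return None

# A 4,000 Hz speech range split across 10 filters gives 400 Hz per filter,
# matching the example in the text.
bands = make_filter_bands(0.0, 4000.0, 10)
```

A 1,000 Hz tone would then fall into the third band (800 to 1,200 Hz), i.e., channel index 2, which in a tonotopic arrangement stimulates a relatively apical electrode.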
