Along the lines of Marcus' comments, I feel that there is quite a bit to say about couplings: device to cochlea, cochlea to mind. At each stage there are problems of impedance matching, purely acoustic matchings as well as matchings to the accumulated weights of networks.<p>
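To put a number on the purely acoustic side of it (a toy calculation in Python; the impedance values are textbook approximations for air and water, not anything device-specific): at normal incidence, the fraction of acoustic power crossing a boundary between media of specific impedances Z1 and Z2 is 4·Z1·Z2/(Z1+Z2)², which for air against cochlear-fluid-like water comes out to roughly a tenth of a percent, a ~30 dB loss that the middle ear exists to claw back.

# A rough sketch of the air-to-fluid impedance mismatch (textbook values, not device data).
import math

def power_transmission(z1, z2):
    # Fraction of incident acoustic power transmitted across a planar boundary
    # between media of specific impedances z1 and z2, at normal incidence.
    return 4 * z1 * z2 / (z1 + z2) ** 2

Z_AIR = 415.0       # rayl (kg / m^2 / s), approximate
Z_WATER = 1.48e6    # rayl, a stand-in for cochlear fluid

t = power_transmission(Z_AIR, Z_WATER)
print(f"fraction of power transmitted: {t:.4%}")         # about 0.1%
print(f"equivalent loss: {-10 * math.log10(t):.1f} dB")  # about 30 dB
<p>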
Addressing the first impedance problem is not too far removed from the audiophile problem of designing a home theater. How can one create a system (source, room, receiver, location in the room) where place (here) is replaced by the feeling of a concert hall, of <i>being there</i>? Our ears are not anechoic chambers; they have very poor acoustic isolation, and I very much empathize with those who find themselves confined to live in headphones. It must be very disorienting indeed. Much of what we hear (and have thus learned to calibrate against) is influenced by the effect of sound on the body: the temporal bone, sound through soft membranes, etc. A hearing aid is likely not accounting for these differences in phase; they reach the cochlea by <i>other</i> means. There is also the problem of speaker size: a small speaker is limited in its capacity to <i>push bass</i>. Even when I walk with noise-canceling headphones, there is the mixed information of receiving bass through my body (trains, trucks, and ambient hum) versus the bass provided by the music I am listening to. I suspect the body has learned to calibrate this <i>additional</i> information in ways that a hearing aid (in its form today) simply cannot match.<p>
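On the bass point, a first-order way to see the limit (this is a generic sealed-box model with invented resonance frequencies, not a claim about any particular hearing aid or headphone): below its resonance a sealed driver behaves like a second-order high-pass filter, rolling off at roughly 12 dB per octave, and small drivers tend to have high resonances.

# A sealed driver modeled as a second-order high-pass; the resonance values are invented.
import math

def sealed_box_response_db(f, fc, q=0.707):
    # Magnitude in dB of a second-order high-pass with resonance fc and total Q.
    r = f / fc
    mag = r ** 2 / math.sqrt((1 - r ** 2) ** 2 + (r / q) ** 2)
    return 20 * math.log10(mag)

for label, fc in [("tiny in-ear driver (fc ~ 500 Hz)", 500.0),
                  ("large subwoofer (fc ~ 30 Hz)", 30.0)]:
    levels = {f: round(sealed_box_response_db(f, fc), 1) for f in (20, 40, 80, 160)}
    print(label, levels)   # dB relative to passband, at a few bass frequencies
<p>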
As to the second impedance problem, if something general can be said of neurological learning, what are the preferred modes of interpretation that neurology tends toward? For instance, in the limit, wavelets and Fourier transforms[λ] ought to be the same (I suspect), but in practice (in implementation) the two are very different, and it may require a more careful analysis of the biology itself before the question is settled. There are questions of affordances, questions of computational trade-offs, capacities to work with what is given. A few years ago, Nick sent me a paper on <a href="https://en.wikipedia.org/wiki/Locality-sensitive_hashing" target="_top" rel="nofollow" link="external">locality-sensitive hashing</a> in which researchers attempted to derive useful <a href="https://science.sciencemag.org/content/358/6364/793" target="_top" rel="nofollow" link="external">hashing algorithms</a> given the limited resources available to the olfactory system of fruit flies. I suspect that any fruitful determination of function (Fourier transform, wavelet, something else entirely...) must involve an investigation of human capacity, of affordance.<p>
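For what it's worth, the fly scheme in that paper, as I read it, comes down to a sparse random expansion followed by a winner-take-all step: project the input into a much larger space through a sparse binary matrix, then keep only the indices of the top few activations as the hash tag. A stripped-down sketch (the dimensions and sparsity here are made up, and the real circuit also normalizes its inputs):

# Fly-olfaction-style locality-sensitive hashing, in the spirit of the paper linked above.
import numpy as np

rng = np.random.default_rng(0)

def fly_hash(x, projection, k=16):
    # Sparse random expansion followed by winner-take-all:
    # return the indices of the k most active output units as the hash tag.
    activity = projection @ x
    return frozenset(np.argpartition(activity, -k)[-k:].tolist())

d_in, d_out = 50, 2000   # input and expanded dimensions, arbitrary choices
# Each output unit samples a small random subset of inputs (sparse binary projection).
projection = (rng.random((d_out, d_in)) < 0.1).astype(float)

x = rng.random(d_in)
x_near = x + 0.01 * rng.standard_normal(d_in)   # a small perturbation of x
x_far = rng.random(d_in)                        # an unrelated input

h = fly_hash(x, projection)
print("tag overlap with nearby input:   ", len(h & fly_hash(x_near, projection)))
print("tag overlap with unrelated input:", len(h & fly_hash(x_far, projection)))

Nearby inputs keep most of their tag in common and unrelated inputs do not, which is all a locality-sensitive hash asks for.<p>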
[λ] I don't know much about wavelets, and while I know some about Fourier transforms, there is much I do not know. In this <a href="https://www.youtube.com/watch?v=AnkinNVPjyw&ab_channel=TheAbelPrize" target="_top" rel="nofollow" link="external">accessible lecture by Terence Tao</a>, he presents the work of Yves Meyer on wavelets and offers some insight into how the technique can differ in character from the Fourier approach.
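To make the footnote concrete with a toy comparison (nothing more than that): a single click is spread across essentially every Fourier coefficient, while even one level of a Haar wavelet transform keeps it pinned to a single coefficient in time.

# A click in time: smeared across the Fourier spectrum, localized by a Haar step.
import numpy as np

n = 256
signal = np.zeros(n)
signal[100] = 1.0   # a single click, perfectly localized in time

# Fourier: the click's energy appears in every frequency bin.
spectrum = np.fft.rfft(signal)
print("Fourier bins with non-negligible magnitude:",
      np.sum(np.abs(spectrum) > 1e-6), "of", len(spectrum))

# One level of a Haar wavelet transform: pairwise averages and differences.
pairs = signal.reshape(-1, 2)
detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
print("Haar detail coefficients with non-negligible magnitude:",
      np.sum(np.abs(detail) > 1e-6), "of", len(detail))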