Return to Romy the Cat's Site


In the Forum: Playback Listening
In the Thread: Time Alignment : Live Performances vs Audio
Post Subject: More notches in the repository of hypotheses
Posted by Romy the Cat on: 3/7/2007

Well, we do suffer from time misalignment during live concerts. I was a few times in the first rows at the extreme left, and the entire sound was practically unlistenable, as the orchestra never sounded synchronized. It was bad, but it would be even more horrible if it were a recording. In recordings it would be different, depending on the recording techniques used, the number of tracks/microphones, their positioning, and the way everything is mixed. Sure, in the case of two properly positioned microphones (and I personally do not know what would constitute properly positioned, as I have zero experience in it) the arrivals would be more or less equalized, and that is what "better" sound engineers do…
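To put a rough number on why an extreme-left seat sounds so unsynchronized, the arrival-time spread is just path-length difference divided by the speed of sound. The sketch below is purely illustrative; the seat and stage coordinates are hypothetical round numbers, not measurements from any actual hall.

```python
# Illustrative arrival-time calculation: how much later sound from the
# far side of the stage reaches a listener seated at the extreme left.
# All coordinates are assumed, made-up values for the sake of the example.

import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 degrees C


def arrival_delay_ms(listener_xy, near_xy, far_xy):
    """Difference in arrival time (ms) between two sound sources at a listener."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return (dist(listener_xy, far_xy) - dist(listener_xy, near_xy)) / SPEED_OF_SOUND * 1000.0


# Listener in the first row, far left; a stage-left instrument nearby
# and a stage-right instrument across the stage.
listener = (-8.0, 2.0)       # 8 m left of centre, 2 m from the stage edge
near_source = (-6.0, 5.0)    # e.g. first violins
far_source = (6.0, 5.0)      # e.g. double basses

print(f"Far source arrives {arrival_delay_ms(listener, near_source, far_source):.1f} ms later")
```

With these made-up coordinates the spread comes out around 31 ms, well above the few-millisecond offsets that are argued over in loudspeaker time alignment, which is consistent with the observation that a front-corner seat scrambles the orchestra's timing.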

Still, the biggest question I see in all of it is: why are time discrepancies in "live" sound more forgivable than time discrepancies during the sound reproduction phase? I could go into a discussion about monaural filtering, distance localization, Doppler effects, judging distances within a reflective context, and the effect of one source overriding other sources… however, I think that the answer might be simpler and come from a slightly different dimension.

Live Sound and Reproduced Sound are NOT the same entities. Live Sound (as a sequence of pressure waves), no matter how "wrong" it is, is Sound that our hearing is accustomed to dealing with, and our brain knows how to interpret, or even to filter out, Live Sound's problems. Reproduced Sound is not Sound itself, not the primary Reality, but an interpretation of Live Sound via the language of mathematical algorithms (transverse waves). Human hearing is absolutely unable to operate in an environment of transverse waves, and therefore we need mechanisms of conversion from longitudinal waves to transverse and then back to longitudinal. In the language of longitudinal waves, distance is a Reality, but in the language of transverse waves distance does not exist and is described ONLY as a calculable equation. However, any equation is just an approximation of reality, successful ONLY in the context of a given hierarchical coordinate system. So, merely by converting Live Sound into Sound Reproduction, we are already "losing" timing as a humanly, immediately referenced ingredient. If so, then the brain has less capacity to comprehend time discrepancies in Sound Reproduction than it does in Live Sound. Add to it the harmonic distortions of Sound Reproduction, which screw up distance localization and, along with time delays, misrepresent everything, and it becomes understandable that time alignment in Live and Reproduced Sound are very different things.

The best evidence that time alignment in Live and Reproduced Sound are very different is the fact that misalignment in Live Sound affects volume only slightly. In Reproduced Sound, however, any misalignment not only highly affects volume but also highly affects the listener's ability to DISCRIMINATE VOLUMES. I do not even mention imaging, which gets absolutely destroyed by time misalignment…

So, to conclude what I proposed, I would like to point out again that in Live Sound we do not deal with alignment but rather with "management of event arrivals". In Reproduced Sound we deal with the equating of mathematical derivatives of multiple functions. What is the difference between a "live event" and a reproduction of "derivatives of functions"? The difference is that the first is random but the second is hierarchical. Human awareness is alien to hierarchical perception, and therefore any discrepancy, even a minute one, in a hierarchical description of Reality our mind recognizes as a "huge error", an error that our awareness begins to interpret, compare, analyze, correct… in other words, to waste our "CPU time" on, while we should be wholly dedicated to music listening.

Rgs,
Romy the Cat
