Last month we eagerly welcomed back the AES International Convention to Europe, in the wonderful city of Berlin. For me it was a return visit, having travelled to the city a month earlier for International Games Week.
Incidentally, my last AES convention was also in Berlin, back when I was still studying in the city. I find the AES an exciting conference: it showcases the latest technologies, hosts insightful presentations and paper sessions, and is in general a great place to meet people from our industry. This year it was even more exciting for me, as I presented my first technical paper at the AES.
The paper grew out of the research results of my bachelor’s thesis, which I carried out in the closing months of 2016 with the virtual acoustics group at the Fraunhofer Institute for Digital Media Technology (IDMT). The thesis compared room simulations reproduced over a standard 5.0 channel-based surround system and over an object-based multi-channel system developed by the IDMT known as the Spatial Sound Wave (SSW), which is based on Wave Field Synthesis (WFS) theory plus additional perceptual algorithms. To give a very brief overview of object-based (or scene-based) audio and the SSW: in this production method, audio sources (recorded or synthesized) are rendered as virtual sources in a pre-defined virtual environment. As a result, mixing becomes independent of the loudspeaker setup. Furthermore, this technology can in theory provide a more accurate reconstruction of the virtual source’s wave front (following WFS theory).
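To make the layout-independence idea concrete, here is a minimal sketch in Python. The class and function names are purely illustrative (they are not the SSW API), and the panner is a toy pairwise gain distribution, far simpler than a real WFS renderer; it only shows how an object carries position metadata instead of a fixed channel assignment, so the same mix can be rendered to any loudspeaker layout.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AudioObject:
    """A source carrying scene-position metadata instead of a channel number.
    Illustrative only -- not the actual SSW data model."""
    name: str
    azimuth_deg: float  # position in the virtual scene
    gain: float = 1.0

def render_gains(obj: AudioObject, speaker_azimuths_deg: List[float]) -> List[float]:
    """Toy renderer: split the object between its two nearest loudspeakers,
    weighted by angular proximity. Real renderers (WFS, VBAP) are far more
    sophisticated; this only demonstrates loudspeaker-layout independence."""
    def ang_dist(a: float, b: float) -> float:
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    dists = [ang_dist(obj.azimuth_deg, s) for s in speaker_azimuths_deg]
    order = sorted(range(len(dists)), key=lambda i: dists[i])
    i, j = order[0], order[1]
    gains = [0.0] * len(speaker_azimuths_deg)
    total = dists[i] + dists[j]
    if total == 0.0:  # object sits exactly on a loudspeaker
        gains[i] = obj.gain
    else:
        gains[i] = obj.gain * (1.0 - dists[i] / total)
        gains[j] = obj.gain * (1.0 - dists[j] / total)
    return gains

# The same object renders to any layout -- 5.0 here, but any list works:
itu_50 = [0.0, 30.0, -30.0, 110.0, -110.0]  # ITU-R BS.775 azimuths
obj = AudioObject("reverb_tail", azimuth_deg=15.0)
print(render_gains(obj, itu_50))  # split evenly between C (0°) and L (30°)
```

Swapping `itu_50` for any other azimuth list re-renders the same scene without touching the mix, which is the core appeal of the object-based approach described above.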
Back to my research: the methodology involved reproducing simulated reverberant sound fields within both setups (using the same loudspeaker layout), generated by a Lexicon 960L digital reverberation unit. Impulse response (IR) measurements were then taken for a selection of different parameter settings on the 960L, at several pre-defined locations within the circular loudspeaker array. For the 5.0 setup, the five discrete outputs of the 960L were routed directly to the loudspeakers according to the ITU-R BS.775 recommendation. In the object-based setup, the same discrete outputs were routed to the SSW system and rendered as virtual sources, which were then placed approximately 1 metre behind the loudspeakers used in the channel-based setup.
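The geometry of that last step is easy to sketch: a virtual source "1 metre behind" a loudspeaker sits on the same azimuth but at a larger radius. The snippet below is a hedged illustration, not the actual experiment code; in particular the 2 m array radius is an assumed value, and the paper's actual setup may differ.

```python
import math
from typing import List, Tuple

def virtual_source_positions(
    speaker_azimuths_deg: List[float],
    array_radius_m: float,
    offset_m: float = 1.0,
) -> List[Tuple[float, float]]:
    """Place one virtual source `offset_m` metres behind each loudspeaker,
    i.e. at the same azimuth but at radius (array_radius_m + offset_m).
    Coordinates: x to the listener's right, y to the front."""
    r = array_radius_m + offset_m
    positions = []
    for az in speaker_azimuths_deg:
        theta = math.radians(az)
        positions.append((r * math.sin(theta), r * math.cos(theta)))
    return positions

# ITU-R BS.775 5.0 azimuths (C, L, R, Ls, Rs); 2 m radius is an assumption:
itu_50 = [0.0, 30.0, -30.0, 110.0, -110.0]
for az, (x, y) in zip(itu_50, virtual_source_positions(itu_50, 2.0)):
    print(f"{az:7.1f} deg  ->  x = {x:+.2f} m, y = {y:+.2f} m")
```

Feeding these positions to the object renderer is what lets both setups use the identical physical loudspeaker layout while the SSW treats the 960L outputs as sources in the scene.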
More details can be found in the paper itself, which can be downloaded from the following link:
A hot topic at AES 142 was spatial and 3D audio, with particular emphasis on Ambisonics, including recording, mixing and delivery techniques for these relatively new formats (mostly for VR applications). MPEG-H was indeed a big topic as a new delivery format, as were binaural audio and the use of HRTFs for tailored end-user experiences.
I also attended the first technical committee (TC) meeting for game audio at AES 142, which included representatives from Dolby, Native Instruments and several universities, among others. People like Nuno Fonseca (also present, with his Sound Particles software) are providing interesting tools for sound designers creating spatial audio for interactive media.
One point that definitely came through in most of the lectures and presentations is that the technology is rushing forward (as it does most of the time), which has brought affordable Ambisonic microphones to the market. At the same time, there is a lot of confusion around delivery formats and methodologies, including how to capture and mix for VR applications.
Another presentation I enjoyed was ‘The Ins and Outs of Microphones’ by John Willett, which refreshed some fundamentals of microphone technology. The poster sessions also covered some interesting topics, including motion tracking for hand gestures, soundscape recording techniques, and more.
The exhibit area had some mouth-watering (or should that be ear-watering?) technologies, including the Neve DFC 3D console; the integration of Dolby Atmos within Pro Tools (which hopefully comes to other DAWs too); and the new professional-grade tape from Recording the Masters (RTM), which acquired the technology from BASF (a resurgence of affordable tape recording?), among others.
My only regret from the convention is that there were presentations I wanted to attend but couldn’t, owing to clashes with other sessions, such as the lecture by David Griesinger. However, I was glad to have a personal word with Mr Griesinger himself about his new publication.
What’s certain is that we are living in an exciting period for audio technology in general, and for immersive audio applications in particular. It is encouraging to see so many people, companies and institutes working in the 3D audio field, which will push the technology and methodologies forward.
Did you also attend the AES 142 Convention? If so, what was your experience?