We Walk Together – soundwalk (2020)

5 soundwalks, 5 cities, spatialized in 3rd order Ambisonics.

Authors: Rui Chaves, Eduardo Patrício, Laura Romero, Lílian Nakao Nakahodo, Luz Camara.
Score: Rui Chaves
Mixing, editing and spatialization: Eduardo Patrício
25’00’’ | Mixed-media

Published in Vortex Music Journal

http://periodicos.unespar.edu.br/index.php/vortex/article/view/4521

We walk together (2020) is conceived as a field-recording score that asks would-be performers to think sonically about a place, to produce sound within that place, but also to reflect on the changes caused by the pandemic at both an environmental and a subjective level. This collaborative endeavor is a continuation of my exploratory work on novel ways of curating sound artwork outside institutional frameworks. The resulting work is, to borrow a visual reference, a multi-layered and multilingual ‘sequence shot’: an audio piece that straddles the line between fiction, documentary and musical composition.

The sonic essay is a collaboration between a series of artists located in different parts of the globe: Rui Chaves (Loulé, Portugal, 25/06/2020), Eduardo Patrício (Koziegłowy, Poland, 17/06/2020), Laura Romero (Valencia, Spain, 06/06/2020), Lilian Nakao Nakahodo (Curitiba, Brazil, 05/07/2020), Luz da Camara (Evoramonte, Portugal, 19/06/2020).

Toward live recorded 6DoF audio – Unity and ZM-1 microphone arrays (2018)

6DoF (six degrees of freedom) refers to the ability of a rigid body (or a representation of it) to move freely along three translational and around three rotational axes.
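As a minimal illustration of that definition (names here are my own, not part of any VR engine's API), a 6DoF pose can be modeled as three translational coordinates plus three rotation angles:

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """A rigid-body pose: three translational and three rotational degrees of freedom."""
    # translational degrees (position in metres)
    x: float
    y: float
    z: float
    # rotational degrees (orientation in radians)
    yaw: float
    pitch: float
    roll: float

# A listener 2 m forward, 1.6 m up, looking slightly left
listener = Pose6DoF(x=0.0, y=1.6, z=2.0, yaw=0.3, pitch=0.0, roll=0.0)
```

Tracking systems in 6DoF VR update all six of these values continuously, which is what makes faithful audio capture for free movement so demanding.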

Designing sound for immersive VR 6DoF experiences is usually done by using mono or stereo sound artificially spatialised through software processing that seeks to replicate reverberation, diffusion and other acoustic behaviors.

Capturing a live sound recording in a physical acoustic space so that it can be used in a 6DoF VR context, however, is said to be impossible or, at least, very hard.

To try and achieve just that, I used simultaneous recordings made with nine ZYLIA ZM-1 microphone arrays, placing the resulting Ambisonics sound files in a scene I built in Unity 3D. The sound sources in those recordings are physical objects.
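One way to blend several simultaneous array recordings as a listener moves between their positions is to weight each recording by its proximity to the listener. The sketch below uses simple inverse-distance weighting; it is illustrative only, and `array_mix_weights` is a hypothetical helper, not the exact interpolation method used in the project or in Unity:

```python
import numpy as np

def array_mix_weights(listener_pos, array_positions, eps=1e-6):
    """Return normalized gains for mixing simultaneous recordings from
    several microphone arrays, weighted by inverse distance to the listener.
    (Illustrative sketch -- not the method described in the AES paper.)"""
    diffs = np.asarray(array_positions, dtype=float) - np.asarray(listener_pos, dtype=float)
    dists = np.linalg.norm(diffs, axis=1)
    weights = 1.0 / (dists + eps)   # eps avoids division by zero at an array's position
    return weights / weights.sum()  # normalize so the gains sum to 1
```

Standing at one array's position yields a dominant weight for that array's recording, with the others fading in as the listener walks toward them.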

The following video explains the process, step by step:

Related paper presented at the 146th AES Convention in Dublin (2019):
http://www.aes.org/e-lib/browse.cfm?elib=20274

(More information and reference to previous recordings can be found on this blog post from ZYLIA.)

The initial proof-of-concept was recently presented at GameSoundCon in Los Angeles, CA and the AES Convention in New York, NY – USA.


Ambisonics and binaural audio for 360-degree and ‘tiny planet’ videos (2018)

360-degree video


‘Tiny planet’ video

Here we have two videos that share the same source material (audio and video recordings) but went through different post-processing workflows, resulting in:

  • A 360-degree video with 1st order Ambisonics audio, and
  • A ‘tiny planet’ video with binaural audio.
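First-order Ambisonics (B-format) can be rendered to two channels by pointing virtual cardioid microphones left and right, a crude stand-in for the HRTF-based binaural rendering a real workflow would use. The sketch below assumes the FuMa convention (W scaled by 1/√2, horizontal plane only) and is not the workflow used for the published videos:

```python
import numpy as np

def encode_foa(signal, azimuth):
    """Encode a mono signal into first-order B-format (FuMa W, X, Y),
    horizontal plane only; azimuth in radians, left = +pi/2."""
    w = signal / np.sqrt(2.0)
    x = np.cos(azimuth) * signal
    y = np.sin(azimuth) * signal
    return w, x, y

def decode_virtual_cardioids(w, x, y, spread=np.pi / 2):
    """Decode B-format to stereo using two virtual cardioid mics
    aimed at +/- spread radians from front."""
    def mic(a):
        # cardioid pickup: 0.5 * (omni + figure-of-eight toward azimuth a)
        return 0.5 * (np.sqrt(2.0) * w + np.cos(a) * x + np.sin(a) * y)
    return mic(spread), mic(-spread)
```

A source encoded hard left ends up almost entirely in the left virtual microphone, which is the basic behavior any Ambisonics-to-stereo decode preserves.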

(Actual videos below)

I prepared the videos (concept, audio / video recording and editing) for ZYLIA and they were published on their YouTube channel.

The recordings were made at Barigui Park, Curitiba (Brazil). For some more details, you can check this short tutorial blog post.
