IN PROTEST – VR film series (2020)

IN PROTEST is a project from GRX Immersive Labs and is being produced by a core team of Black immersive storytellers and creative professionals led by Alton Glass and Adam Davis-McGee.

My roles: sound design; spatial / Ambisonics mix.

The first 7 episodes are already available on Oculus TV: “First Response | IN PROTEST: Minneapolis & St. Paul”

Behind the scenes interview: https://www.oculus.com/blog/behind-the-scenes-of-in-protest-with-alton-glass-and-adam-davis-mcgee/

We Walk Together – soundwalk (2020)

5 soundwalks, 5 cities, spatialized in 3rd order Ambisonics.

Authors: Rui Chaves, Eduardo Patrício, Laura Romero, Lílian Nakao Nakahodo, Luz Camara.
Score: Rui Chaves
Mixing, editing and spatialization: Eduardo Patrício
25’00’’ | Mixed-media

Published on Vortex Music Journal

http://periodicos.unespar.edu.br/index.php/vortex/article/view/4521

We Walk Together (2020) is conceived as a field-recording score that asks would-be performers to think sonically about a place, to produce sound within that place, and to reflect on the changes caused by the pandemic at both an environmental and a subjective level. This collaborative endeavor continues my exploratory work in novel ways of curating sound artwork outside institutional frameworks. The resulting work is, to use a visual analogy, a multi-layered and multilingual ‘sequence shot’: an audio piece that straddles the line between fiction, documentary and musical composition.

The sonic essay is a collaboration between a series of artists located in different parts of the globe: Rui Chaves (Loulé, Portugal, 25/06/2020), Eduardo Patrício (Koziegłowy, Poland, 17/06/2020), Laura Romero (Valencia, Spain, 06/06/2020), Lilian Nakao Nakahodo (Curitiba, Brazil, 05/07/2020), Luz da Camara (Evoramonte, Portugal, 19/06/2020).
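The spatialization step can be illustrated with a minimal first-order Ambisonic encoder (the published walks use 3rd order, which adds 12 more channels; the function names and the ACN/SN3D convention shown here are my own illustrative choices, not the project's actual tooling):

```python
import math

def foa_encode_gains(azimuth_deg, elevation_deg):
    """First-order Ambisonic panning gains [W, Y, Z, X] (ACN order,
    SN3D normalization) for a mono source at the given direction."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = 1.0                               # omnidirectional component
    y = math.sin(az) * math.cos(el)       # left-right component
    z = math.sin(el)                      # up-down component
    x = math.cos(az) * math.cos(el)       # front-back component
    return [w, y, z, x]

def encode_mono(samples, azimuth_deg, elevation_deg):
    """Scale a mono sample list by each gain, yielding four B-format channels."""
    gains = foa_encode_gains(azimuth_deg, elevation_deg)
    return [[g * s for s in samples] for g in gains]
```

A source panned to 90° azimuth ends up almost entirely in the Y channel; a higher-order encoder extends the same idea with additional spherical-harmonic gains for sharper localization.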

Toward live recorded 6DoF audio – Unity and ZM-1 microphone arrays (2018)

6DoF (six degrees of freedom) refers to the ability of a rigid body (or a representation of one) to move freely along three translational and three rotational axes.

Designing sound for immersive 6DoF VR experiences is usually done with mono or stereo sources artificially spatialised through software processing that seeks to replicate reverberation, diffusion and other acoustic behaviors.

Capturing a live sound recording in a physical acoustic space so that it can be used in a 6DoF VR context, however, is said to be impossible or, at least, very hard.

To try and achieve just that, I used simultaneous recordings made with 9 ZYLIA ZM-1 microphone arrays, placing the resulting Ambisonics sound files in a scene I built in Unity 3D. The sound sources in those recordings are physical objects.
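Inside Unity the blending between recording positions is handled by the engine's attenuation settings; as a hedged sketch of the underlying idea (not the actual project code; function name and rolloff parameter are mine), each array's recording can be weighted by the listener's distance to the position where that array stood:

```python
import math

def array_weights(listener_pos, array_positions, rolloff=1.0):
    """Blend weights for several microphone-array recordings, using
    inverse-distance weighting normalized to sum to 1. As the listener
    moves through the virtual space, the nearest recording dominates."""
    eps = 1e-6  # avoid division by zero when standing exactly on an array
    dists = [math.dist(listener_pos, p) for p in array_positions]
    raw = [1.0 / (d ** rolloff + eps) for d in dists]
    total = sum(raw)
    return [w / total for w in raw]
```

With nine arrays spread through the recorded space, this kind of weighting lets the perceived acoustic perspective shift continuously as the listener walks between recording positions.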

The following video explains the process, step by step:

Related paper presented at the 146th AES Convention in Dublin (2019):
http://www.aes.org/e-lib/browse.cfm?elib=20274

(More information and reference to previous recordings can be found on this blog post from ZYLIA.)

The initial proof-of-concept was presented at Game Sound Con in Los Angeles, CA and at the AES Convention in New York, NY – USA.


Ambisonics and binaural audio for 360-degree and ‘tiny planet’ videos (2018)

360-degree video


‘Tiny planet’ video

Here are 2 videos that share the same source material (audio and video recordings) but followed different post-processing workflows, resulting in:

  • A 360-degree video with 1st order Ambisonics audio, and
  • A ‘tiny planet’ video with binaural audio.

(Actual videos below)

I prepared the videos (concept, audio / video recording and editing) for ZYLIA and they were published on their YouTube channel.

The recordings were made at Barigui Park, Curitiba (Brazil). For some more details, you can check this short tutorial blog post.

PhD thesis – Spatial referentiality and openness: a portfolio of environmental based compositions (2016)

This is the result of my practice-based research at Queen’s University Belfast (more specifically, at SARC), supervised by Dr. Pedro Rebelo and Dr. Paul Stapleton (2nd supervisor).
Here, you can find the full portfolio commentary in PDF, related posts and video documentation for each piece.

Abstract

Through a creative portfolio and an analytical and critical commentary, this research investigates the use of spatial references in the composition of semi-open environmental sound works. The portfolio explores a number of strategies to make use of spatial references as formal compositional components to enable more intuitive performance/reading experiences. The pieces present a number of electronically mediated scenarios in varied formats; concert, installation and mobile application. Counting on the intuitive way one tries to constantly identify surrounding spaces, each piece uses physical (performance/presentation spaces) and representational devices (illustrations, maps, video projections, spatialised sound etc.) to articulate and delimitate semi-open artistic experiences. Such ambiguous scenarios are enabled by both the unpredictability of elements of each work and the dependence on the subjective interpretations of the agents involved in the process. The creative processes presented here in a descriptive, analytical and critical manner attempt to make an artistic contribution and provide documental material for future reflection about related practices.

Download: commentary_PhD_thesis_Patricio_2016

Related posts: No Chords Attached; Come Across; Lock 1 memories; Sienkiewicz Pipes; Up the Hill; A Blue Bridge

Videos of the portfolio pieces:

No Chords attached (short version)

No Chords attached (full concert)

Come Across (short version)

Come Across (full concert)

Lock 1 Memories

Sienkiewicz Pipes

Up the Hill

A Blue Bridge (short version)

A Blue Bridge (full concert)

A blue bridge (2015)

A Blue Bridge is a soundwalk-based audio-visual piece for 16 channels. The piece presents readings of a particular urban space in the city of Belfast through an interactive performance that recreates a soundwalk, juxtaposing and superimposing layers of sound, light and memories. It proposes a semi-open concert situation, conducted by a performer who leads the audience through a reconstructed soundwalk composed of fragments of several recordings of the same route, crossing a footbridge over the River Lagan (Belfast, UK).

A custom Max MSP patch manages 1 video file and 16 audio channels divided into two layers: 8 fixed (the “real” sounds layer) and 8 manipulated and processed in real time (the “imagination” layer). The fixed layer contains several stereo recordings of the walk made at different times of the day, edited to create a number of sub-layers. This approach allowed me to assemble complex sonic environments that resemble the original ones while expanding them from stereo to an 8-channel surround scenario.

During the performance, these 8 channels are played back synchronously with the video. In terms of content, this first layer presents mainly background, ambient sounds, without many distinct sonic events.

The second audio layer (“imagination”) contains heavily edited recordings of mostly foreground sounds, which can be manipulated and emphasized by moving a wireless control sphere (sending OSC messages to Max MSP).
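The actual mapping lives inside the Max MSP patch; as a rough, hypothetical Python sketch of the idea (the function name, channel layout and cosine falloff are my own assumptions, not the patch's logic), a control azimuth derived from the sphere's OSC data could raise the gains of the “imagination” channels facing that direction:

```python
import math

def imagination_gains(azimuth_deg, n_channels=8, width_deg=90.0):
    """Map a control azimuth (e.g. derived from the sphere's OSC data)
    to gains for n_channels equally spaced around the audience.
    Channels facing the control direction are raised, with a cosine
    falloff that reaches zero at width_deg away from it."""
    gains = []
    for ch in range(n_channels):
        ch_az = ch * 360.0 / n_channels
        # smallest signed angular difference between control and channel
        diff = (azimuth_deg - ch_az + 180.0) % 360.0 - 180.0
        if abs(diff) < width_deg:
            gains.append(0.5 * (1.0 + math.cos(math.pi * diff / width_deg)))
        else:
            gains.append(0.0)
    return gains
```

Pointing the control at 0° fully opens the front channel, half-opens its two neighbours, and silences the rear, so moving the sphere sweeps foreground material around the audience.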

In the following performance excerpts, recorded in the Sonic Lab at Queen’s University, the 16 channels were mapped onto 32 channels with speakers on 3 levels: below, above and at audience level.

Full video from Sonorities Festival 2015 (Queen’s University Belfast, UK):
