This personal project is a work in progress that explores a granular synthesis technique constrained by musical harmony theory, using fuzzy logic to play back small portions of sound called grains. The algorithm distributes the grains in a two-dimensional space according to a musical structure shaped by a chord progression. Grains are played back in a timed sequence fitted to a rhythmic pattern, ordered by a fuzzy logic method that considers the energy and frequency of each grain within a defined harmonic environment. The digital synthesizer built on this approach is called “GSynthSky: Granular Synthesis in the Sky”, after its aesthetic presentation, which is based on the analogy of a night sky full of stars (here, the grains).

GSynthSky Architecture

The organization of grains is based on a chord progression shaped by the circle of fifths, which, for the seven diatonic chords, can be played counterclockwise as I-IV-vii°-iii-vi-ii-V-I. This progression was chosen to increase the musical congruence of the grains taken from the audio samples.
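As a minimal sketch (not the project's implementation), the I-IV-vii°-iii-vi-ii-V progression can be generated for any major key by walking those scale degrees and stacking diatonic thirds; the function names and sharp-only note spelling are assumptions for illustration.

```python
# Diatonic circle-of-fifths progression: I-IV-vii°-iii-vi-ii-V.
# Note names use sharps only, so Bb appears as A#.

MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# 0-based scale degrees for I, IV, vii°, iii, vi, ii, V (counterclockwise order).
CIRCLE_OF_FIFTHS_DEGREES = [0, 3, 6, 2, 5, 1, 4]

def diatonic_triad(key_root, degree):
    """Return the three note names of the triad built on a scale degree."""
    scale = [(key_root + step) % 12 for step in MAJOR_SCALE_STEPS]
    return [NOTE_NAMES[scale[(degree + i) % 7]] for i in (0, 2, 4)]

def progression(key_root):
    """All seven triads of the progression for a major key (root in semitones from C)."""
    return [diatonic_triad(key_root, d) for d in CIRCLE_OF_FIFTHS_DEGREES]

# Example: F major (root = 5) yields F, Bb, Edim, Am, Dm, Gm, C.
for chord in progression(5):
    print(chord)
```

For the F major key used later in the prototype, this walk produces F major, Bb major, E diminished, A minor, D minor, G minor, and C major, closing back on the tonic.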

An architecture for a granular synthesizer that supports the proposed strategy and the musical structure described above is illustrated in the figure below.

Granular Synthesizer GSynthSky Architecture

The synthesizer randomly takes small portions of audio (grains) from one or more audio samples. Since all grains inside the musical structure must be near the notes of the chords in the progression, the grain selector passes the randomly taken portions to the grain distributor, which admits or discards each grain according to its signal frequency. This classification happens once at system startup and creates a cloud of grains: the collection of all admitted audio portions distributed over each chord space, as depicted in the next image.
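The admit/discard step could be sketched as follows; this is a hypothetical illustration, not the published implementation. The helper names, the pitch-class folding, and the 50-cent tolerance are all assumptions: a grain is kept for a chord's space when its dominant frequency lands close to one of the chord's notes in any octave.

```python
# Hypothetical admission test: keep a grain if its dominant frequency is
# within a tolerance (in cents) of any chord note, octave-folded.
import math

def freq_of_midi(midi_note):
    """Equal-tempered frequency of a MIDI note (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def cents_between(f1, f2):
    """Absolute interval between two frequencies, in cents."""
    return abs(1200 * math.log2(f1 / f2))

def admit_grain(grain_freq, chord_midi_notes, tolerance_cents=50):
    """Admit a grain whose dominant frequency is near any chord note."""
    for note in chord_midi_notes:
        ref = freq_of_midi(note)
        # Fold the grain frequency into the octave centered on the reference.
        folded = grain_freq
        while folded > ref * math.sqrt(2):
            folded /= 2
        while folded < ref / math.sqrt(2):
            folded *= 2
        if cents_between(folded, ref) <= tolerance_cents:
            return True
    return False
```

A distributor built this way would, for example, admit a grain at 880 Hz into an A minor chord space (it folds onto A4 = 440 Hz) while discarding one at 415.3 Hz, which sits a full semitone away.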

Grains distributed in a chord progression (key of F major) in a graphical interface resembling a starry sky

After the cloud of grains has been created, the system runs a real-time algorithm at every tick of a general metronome that controls the timing in BPM (beats per minute). A human performer can then trigger the portions of sound by graphically moving a circular area, smaller than the space occupied by the cloud, as shown in the previous image. The grains inside the pointed area are taken by a grain picker, which adds them to (or removes them from) the process of synthesizing new sounds. This set of grains is organized into an ordered list by a grain allocator over a musical bar; at the same time, the grains are ordered differently by a fuzzy prioritizer that uses their signal energy and frequency. Both ordered sets are processed by a grain sequencer builder that places the prioritized grains in the positions assigned by the distance method, which considers how far each grain is from the center of the circular area. The organized grains are then played back, producing new musical sounds entirely different from the sound sources, with a specific rhythm given by the sequencer.
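The prioritize-then-place step could be sketched like this. It is a simplified stand-in, not the paper's method: the fuzzy prioritizer is approximated by a crisp weighted score over normalized energy and frequency, and the distance method is assumed to give bar slots to grains nearest the circle's center first; all names and weights are hypothetical.

```python
# Simplified sketch of per-tick sequencing: score grains by energy and
# frequency (crisp stand-in for the fuzzy prioritizer), then place the
# highest-priority grains into the bar slots of the grains nearest the
# center of the picking circle.
from dataclasses import dataclass

@dataclass
class Grain:
    energy: float     # normalized signal energy, 0..1
    frequency: float  # normalized dominant frequency, 0..1
    distance: float   # distance from the center of the picking circle

def priority(grain, w_energy=0.5, w_freq=0.5):
    """Crisp stand-in for the fuzzy prioritizer: weighted blend of features."""
    return w_energy * grain.energy + w_freq * grain.frequency

def build_sequence(grains):
    """Assign prioritized grains to bar slots ordered by distance from center."""
    by_priority = sorted(grains, key=priority, reverse=True)
    slots = sorted(range(len(grains)), key=lambda i: grains[i].distance)
    sequence = [None] * len(grains)
    for slot, grain in zip(slots, by_priority):
        sequence[slot] = grain
    return sequence
```

Under these assumptions, the loudest, highest grains fire at the bar positions closest to where the performer points, so moving the circle reshapes both which grains play and in what rhythmic order.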

The details of the technique are available in a paper presented at the LA-CCI 2019 conference. Please refer to the Publications section.

Software Implementation

A first prototype of the architecture described above was implemented on the Unity3D game engine platform; it takes the grains from a personal composition that can be listened to below.

“Luz de mi Vida” by Pedro Lucas

The next video shows how the grains are initially distributed and the way they are played back in the prototype. You will notice some timbral properties similar to the original source because the grain size is 500 ms, longer than the typical 50 – 100 ms. The first 23 seconds of the video have no sound.
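The 500 ms grain size comes from the prototype described above; the extraction itself could be sketched as below, where the Hann envelope (to avoid clicks at grain edges), the 44.1 kHz sample rate, and the function names are assumptions for illustration.

```python
# Hypothetical grain extraction: cut a random 500 ms window from a source
# buffer and shape it with a Hann envelope. At 44.1 kHz a 500 ms grain
# spans 22,050 samples, ten times a typical 50 ms grain, which is why
# source timbre remains audible.
import math, random

SAMPLE_RATE = 44100  # Hz, assumed

def extract_grain(source, grain_ms=500, sample_rate=SAMPLE_RATE):
    """Cut a random grain of grain_ms milliseconds and apply a Hann window."""
    n = int(sample_rate * grain_ms / 1000)  # 500 ms -> 22050 samples
    start = random.randrange(0, len(source) - n)
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]
    return [source[start + i] * w for i, w in enumerate(window)]
```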

GSynthSky prototype in action

Next Steps

The implementation of the strategy will include the manipulation of grain parameters such as pitch, length, and amplitude. Audio effects such as filters, reverb, delay, and echo will be part of the synthesis process in order to extend the possibilities of real-time sound manipulation. Also, a Virtual Reality (VR) version of the granular synthesizer will be implemented for performing music with head movements. Experiments with musicians and non-musicians will be conducted to test the usefulness and expressiveness of the synthesizer.

LA-CCI 2019

I presented a paper related to this project at the 6th IEEE Latin American Conference on Computational Intelligence (LA-CCI 2019). The paper received two awards: Best Fuzzy and Stochastic Modelling Paper and Best LA-CCI 2019 Paper. Dr. Enrique Peláez supported me in writing and advising on the development of the paper, as well as in my participation at the conference.

Publications