This project corresponds to my undergraduate thesis, entitled Concurrent Composition and Performance of Musical Melodies in Real-Time based on Human Emotions by using Artificial Intelligence Techniques, which was developed in collaboration with my partner Efraín Astudillo and supervised by our thesis director Dr. Enrique Peláez.

My contribution to this project involved designing the main architecture of the proposal and implementing a Markov Chains solution and a Fuzzy Logic strategy to generate, classify, and play melodies in real time while a musician performs chords according to his or her emotional intention. The architecture below depicts the blocks implemented in a system built to test how well the emotional intention of the musicians matches the emotional perception of an audience.

Architecture for Human-Machine Improvisation

Prototype.

Testing the prototype in a Human-Machine Musical Improvisation

A prototype based on the architecture shown above was developed using a Windows computer and a MIDI controller as hardware. The software that managed the strategy (MIDI handling, the Markov Chains method, and the Fuzzy Logic strategy) was developed in C++ and connected to the sound synthesis engine SuperCollider to simultaneously play the melodies generated by the machine and the harmony performed by the human musician.
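To illustrate the generation side, here is a minimal sketch of a first-order Markov chain over MIDI pitches, in the spirit of the C++ component described above. The class name, structure, and training data are hypothetical, not the thesis implementation: transition counts are learned from example melodies, and the next note is sampled in proportion to those counts.

```cpp
#include <cassert>
#include <map>
#include <random>
#include <vector>

// Hypothetical sketch: a first-order Markov chain over MIDI pitch numbers.
// learn() accumulates transition counts; nextNote() samples the following
// pitch with probability proportional to the learned counts.
class MelodyChain {
public:
    void learn(const std::vector<int>& melody) {
        for (size_t i = 0; i + 1 < melody.size(); ++i)
            counts[melody[i]][melody[i + 1]] += 1.0;
    }

    int nextNote(int current, std::mt19937& rng) const {
        const auto it = counts.find(current);
        if (it == counts.end()) return current;  // no data: repeat the note
        std::vector<int> pitches;
        std::vector<double> weights;
        for (const auto& kv : it->second) {
            pitches.push_back(kv.first);
            weights.push_back(kv.second);
        }
        std::discrete_distribution<size_t> dist(weights.begin(), weights.end());
        return pitches[dist(rng)];
    }

private:
    std::map<int, std::map<int, double>> counts;  // pitch -> (next pitch -> count)
};
```

In a real-time setting, the sampled pitches would be sent to the synthesis engine as note events; higher-order chains can be obtained by keying the map on a short history of pitches instead of a single one.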

Several musicians fed the Knowledge Base, which is composed of transition matrices from the Markov Chains process and of melody patterns obtained from those matrices, labeled with emotions according to the intentions of the musicians through a fuzzification process.
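As a rough illustration of the fuzzification step, a crisp musical feature (for example, note density or tempo) can be mapped to a degree of membership in an emotion label via a triangular membership function. The function and its parameters below are illustrative assumptions, not the membership functions used in the thesis.

```cpp
// Hypothetical sketch: triangular membership function with feet at a and c
// and peak at b. Returns the degree (0..1) to which a crisp feature value x
// belongs to a fuzzy set such as "happy" or "calm".
double triangularMembership(double x, double a, double b, double c) {
    if (x <= a || x >= c) return 0.0;
    if (x == b) return 1.0;
    return (x < b) ? (x - a) / (b - a) : (c - x) / (c - b);
}
```

Each melody pattern would then carry a vector of such membership degrees, one per emotion label, rather than a single hard label.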

A defuzzification strategy was used to play back the appropriate melodies in real time with regard to the chords and new musical material improvised by the musicians.

Results.

Fifteen musical pieces were produced by a musician who used the system as a musical partner and defined several emotions with their corresponding intensities. Those pieces were sent to a general audience, who were asked to describe their emotional perception of the tracks, along with its intensity, in order to verify whether the composer's emotional intention matched the emotional perception of the listeners. Congruence was found between the composer and the audience, although listeners also reported additional emotions beyond those defined by the musician. The pieces can be heard below. For more details, please refer to the Publications section.

I presented two papers related to this work: the first at the XI Jornadas Iberoamericanas de Ingeniería de Software e Ingeniería del Conocimiento 2015 in Riobamba, Ecuador, and the second at the Latin-American Congress on Computational Intelligence (LA-CCI 2015) in Curitiba, Brazil.

Publications