Innovation

Bridging Human and Instrument Through AI

by Su Aziz

Translating body movements into music with AI is Yamaha Corporation's latest innovation, built around, among other things, its Disklavier player piano. This article was first published in the second issue of WIEF In Focus magazine.

Motoichi Tamura believes that, at Yamaha, ‘AI will become a bridge between humans and musical instruments.’ The general manager of Development Department No. 1 in Yamaha Corporation’s research and development division explains why the company innovates continuously: ‘It’s because Yamaha wants to enable people to have freer and more direct expression through musical instruments.’

Research therefore began in 2012 on a piano AI system that followed human musical performances. The system was used in a concert in 2016, after which research started on a dance-to-piano performance system. After a number of experimental performances, Yamaha succeeded in staging a joint performance with world-renowned dancer Kaiji Moriyama in November 2017.

Here, Motoichi sheds light onto the process.

Who was responsible for the idea?
Isao Matsushita, vice president of Tokyo University of the Arts, sought Yamaha’s advice on the feasibility of a new type of concert using advanced technologies. Yamaha proposed a joint performance by a dancer and an ensemble of musicians, both performing as ‘musicians’. Isao decided that the performers should be Kaiji and the Scharoun Ensemble, a chamber group of the Berlin Philharmonic Orchestra.

What were the challenges?
From a technical point of view, the most challenging task was to develop a system that could respond immediately to the dancer’s movements. Unless the system could produce sounds the instant the dancer moved, the result couldn’t be called a performance.

There are already many examples of movement recognition using AI, but producing sounds along with the movements in real time was difficult. From a musical perspective, the challenge was to create a new form of artistic expression while giving proper regard to the existing composition Hi Ten Yu – written by Isao Matsushita in 1993 as a concerto for Japanese drums and an octet ensemble. However, instead of simply reusing motifs from the rhythms of the strong, sonorous Japanese drums in the original composition, the piano was substituted for the drums.

In the resulting rearrangement, the dancer ‘performed’ the piano part while the octet ensemble’s part remained the same. Isao, Kaiji and Yamaha’s technical staff worked together to draw out elegant musical expression using the piano’s full potential. Thus the composition became a new work, Mai Hi Ten Yu – in Japanese, mai means ‘dance’.

What’s the biggest lesson learned?
We learned methods for integrating systems sophisticated enough for a world-renowned dancer to use as a tool for expressing his art. Previously, we had attempted to transform the movements of gymnasts – floor exercise and balance beam performers – and badminton players into music. However, those experiments were limited to transforming a portion of the body’s movements into piano phrases using sensors.

The biggest challenge this time was how to transform the beautiful movements of the dancer’s whole body into sounds, and to make this available to be enjoyed as a form of artistic expression. To make it possible, as a first step, we observed Kaiji’s body movements. Then we classified them and conducted experiments on which sensors could be used to detect these movements.

As a result, we decided to use four types of sensors – measuring acceleration, angular velocity (gyro), extension/contraction and muscle potential – which between them emit 34 kinds of sensor signals. We experimented with various methods for attaching the sensors to Kaiji’s body to ensure that, even during his most vigorous motions, the sensors wouldn’t come loose and the signals would still be detected.

In addition, to enable Kaiji to move freely on the stage, we adopted a wireless system. To create a system that could recognise the various types of movement from the signals emitted by the sensors, we had Kaiji repeat his basic movements over and over, then applied machine-learning methods to this data. Based on the models generated by machine learning, the system was able to recognise the various movements of the dancer instantaneously. We also spent time adjusting the system to transform these movements into phrases for the piano.
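As a rough illustration of that recognition step, here is a minimal sketch, assuming Python with scikit-learn and entirely made-up data and movement classes; Yamaha’s actual system is proprietary and certainly far more sophisticated. A classifier is trained on labelled windows of the 34 sensor signals and then labels new windows as they arrive:

    # Minimal sketch (assumption, not Yamaha's code): train a classifier on
    # labelled windows of the 34 sensor signals, then label live windows.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    N_SIGNALS = 34   # signals from the four sensor types described above
    WINDOW = 50      # samples per analysis window (hypothetical rate)

    def features(window: np.ndarray) -> np.ndarray:
        """Summarise one (WINDOW, N_SIGNALS) block of raw sensor data."""
        return np.concatenate([window.mean(axis=0), window.std(axis=0)])

    # Training: each repetition of a basic movement yields one labelled
    # window. Random data stands in for real recordings here.
    rng = np.random.default_rng(0)
    X = np.array([features(rng.normal(size=(WINDOW, N_SIGNALS)))
                  for _ in range(200)])
    y = rng.integers(0, 5, size=200)   # five hypothetical movement classes

    model = RandomForestClassifier(n_estimators=100).fit(X, y)

    # Performance: classify each incoming window immediately.
    live = rng.normal(size=(WINDOW, N_SIGNALS))
    print('recognised movement class:', model.predict([features(live)])[0])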

Can this technology be easily transferred to other types of piano?
Yes, provided the musical instrument can receive and process data in MIDI, the standard music performance telecommunications protocol. The system can be performed not only on acoustic pianos but on digital pianos as well. For this performance, to ensure the best sound in a classical music concert hall, we used a Disklavier CFX – Yamaha’s flagship CFX concert grand piano, specially fitted with a player piano function.
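To illustrate what receiving data in the MIDI standard means in practice, here is a minimal sketch, assuming Python’s mido library and a MIDI-capable piano connected as the default output port; the note values are arbitrary and this is not Yamaha’s code:

    # Sketch (assumption): any MIDI-capable piano - acoustic player piano
    # or digital - can reproduce a phrase sent as standard MIDI messages.
    import time
    import mido

    port = mido.open_output()       # default MIDI output port

    phrase = [60, 64, 67, 72]       # C major arpeggio, as MIDI note numbers
    for note in phrase:
        port.send(mido.Message('note_on', note=note, velocity=80))
        time.sleep(0.2)             # hold each note for 200 ms
        port.send(mido.Message('note_off', note=note))

    port.close()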

In less spacious concert halls, a smaller grand or upright player piano can be used. There are also cases where digital pianos and other digital sound sources are used. Nor is the system limited to piano sound: it can also reproduce the sounds of other musical instruments, or recorded audio of specified musical phrases.

Do the movements control what musical notes are produced?
Basically, yes. More precisely, the system ‘looks for’ the musical phrase that best matches the dancer’s gestures, drawing from a phrase database we created for this composition.

For example, since movements of the hands and arms are thought to go well with brilliant sounds, they are linked to relatively high notes on the piano. To give a masculine expression matching the movements of the lower half of the dancer’s body, the piano plays relatively low notes. Since the relationships between body movements and sound are composition-dependent, it’s desirable to prepare phrases tailored to each composition.
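As a toy illustration of such a composition-specific mapping – every gesture name and phrase below is hypothetical, chosen only to show the high-register/low-register split just described:

    # Toy phrase database (all entries hypothetical): each recognised
    # gesture selects a phrase, given as MIDI note numbers. Arm and hand
    # gestures map to a higher register than lower-body gestures.
    PHRASE_DB = {
        'arm_sweep':  [84, 88, 91, 96],   # brilliant, high-register phrase
        'hand_flick': [79, 83, 86],
        'leg_stamp':  [36, 43, 48],       # low, weighty phrase
        'torso_turn': [48, 55, 60],
    }

    def phrase_for(gesture: str) -> list[int]:
        # Fall back to an empty phrase (silence) for unknown gestures.
        return PHRASE_DB.get(gesture, [])

    print(phrase_for('arm_sweep'))   # -> [84, 88, 91, 96]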

Does the piano follow the dancer or vice versa?
Both. Sounds are generated by the dancer’s movements, and these sounds in turn influence and change the dancer’s movements. We didn’t aim to reproduce piano phrases determined in advance; instead, we aimed for a system that would generate sounds based on the dancer’s inspiration. As the dancer uses the system and practices movements over and over, he gains the ability to create musical expressions of his own will.

Why do we need to link people to musical instruments?
At present, the body movements needed to perform music differ from one instrument to another. A performer must practice those movements over and over, and each movement applies only to one specific instrument, which means the movements can’t be used to create other forms of artistic expression.

On the other hand, in dancing, gymnastics, figure skating and other activities, performers can move their bodies along with music but can’t control the music itself. If this limitation is eliminated – if a performer can shape the image and ideas of his own expression with both movement and sound at will – we believe performers will be able to express themselves more intuitively and persuasively.

What is next for this technology?
We want to reduce the ‘distance’ between people and musical instruments and develop systems that enable many people to express themselves through music freely.

___________________

For more on the latest topics related to business, technology, finance and more, read the digital versions of In Focus magazine: issue 1, issue 2 and issue 3.

12 Feb 2019