Saturday, October 25, 2008

Daito Manabe - electric stimulus to face -test3


(daito)

music, programming and video by Daito Manabe
some explanation in English

via Synthtopia

4 comments:

Alan Evil said...

How long until we can make music by translating directly from brainwaves? I can remember "hearing" music in my head in certain heightened conditions and wishing there was a way to actually translate what was obviously an audio "signal" in the brain to sound.

John M. said...

We can translate brainwaves into music by using the relative values of their fluctuations to trigger some sort of music-making device. People are working on that now.

It will be a completely different technology that allows us to actuate the music that plays in our heads. BIG difference there. That's what I'm really looking forward to.

The skill sets for these two technologies will be different, too. Brainwave synthesis will be just like learning a musical instrument: learning scales and so on. The music-in-our-heads technology would require a more passive process, along the lines of "clearing one's head" so the music can come through.
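A minimal sketch of that first approach, triggering notes whenever the fluctuation level of a signal crosses a threshold, might look like this. Everything here is hypothetical: the threshold, the note mapping, and the synthetic sine-wave "signal" standing in for real EEG data.

```python
import math

def band_power(samples):
    """Root-mean-square amplitude of a window of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def trigger_notes(signal, window=64, threshold=0.5):
    """Emit a MIDI-style note number whenever the windowed power
    of the signal exceeds the threshold (hypothetical mapping)."""
    notes = []
    for i in range(0, len(signal) - window, window):
        power = band_power(signal[i:i + window])
        if power > threshold:
            # Scale power into a playable range above middle C.
            notes.append(60 + int(min(power, 1.0) * 12))
    return notes

# Simulated signal: a quiet stretch followed by a loud oscillation,
# standing in for a burst of heightened brain activity.
signal = [0.1 * math.sin(0.3 * t) for t in range(256)] + \
         [0.9 * math.sin(0.3 * t) for t in range(256)]
print(trigger_notes(signal))
```

The quiet half of the signal produces no notes; only the loud windows cross the threshold and fire.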

Michael said...

This type of "interface" has already been systematically explored by Arthur Elsenaar since the mid-nineties. Take a look here: http://artifacial.org/arthur_and_the_solenoids and here, at the MIT Media Lab in 1996: http://wearcam.org/previous_experiences/arthur_elsenaar/ Arthur's performance is hilariously nerdy in that he idles through 1048 facial expressions in 20 minutes ... which I think is about as long as an audience can bear.

John M. said...

Hi Michael.

Thanks for the links. I was unaware of this project. Good stuff.

The brainwave interface idea, discussed in previous comments, has been around since the 1960s, maybe earlier.

see:

http://un-certaintimes.blogspot.com/2008/06/music-for-solo-performer.html