Introduction
This paper investigates current trends in turning brain waves into music by surveying the literature on methods of translating Electroencephalography (EEG) signals into Musical Instrument Digital Interface (MIDI) music. The effects of music on the human body are well known and documented: different types of music can benefit people's mood and psychological state. Interest in the reverse operation has grown recently, where it is used for therapeutic purposes under the name neurofeedback. The proposed research aims to provide a simple, portable system able to generate MIDI output from data collected through an EEG recording device. First, however, it is vital to formulate the major research question and identify both the practical and theoretical implications of this study.
Rationale for the study
The key task and research questions
The main issue to be addressed is how to translate Electroencephalography waves into MIDI output. To cope with this task, we need to answer several sub-questions:
- How can EEG waves be correlated with quantitative characteristics of sound (pitch, tempo, rhythm, etc.)? The main hypothesis is that the alpha, beta, and theta frequency bands can correspond to different notes.
- What methods of monitoring and analysis should be employed to transform brain waves into music, namely: 1) power spectrum, 2) Hjorth analysis, 3) Discrete Fourier Transform, and so forth? It may be prudent to use them in turn and compare the data collected by each.
- Which technologies are most suitable for recording and interpreting brainwaves? Due to financial restrictions, we intend to use the Pendant EEG, believed to be the most portable and least expensive EEG unit.
- How should electrodes be positioned on the cranium?
- Which parts of the brain are directly responsible for the creation of music? At this point, we can say that both hemispheres are involved in this process. Some scholars think that the right hemisphere deals with special elements like pitch, whereas the left is responsible for the structure and progress of the melody (Heslet 2).
- What are the major stages in the sonification of neural signals? This process will consist of the following stages: 1) data acquisition; 2) data preprocessing; 3) intermediate representation (the creation of a visual and sonic map); 4) visualization and sonification (Brouse et al. 9).
- What are the impacts of neurofeedback music on a person's psychological state? It is believed that neurofeedback music may have therapeutic effects on the patient and can influence his or her emotional state.
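The hypothesised correspondence between frequency bands and notes, raised in the first sub-question, can be sketched as follows. The band boundaries and the note assignments here are illustrative assumptions for demonstration only, not values established by the literature:

```python
# Illustrative sketch: map the dominant EEG frequency band to a MIDI note.
# Band limits (Hz) and note numbers below are assumptions, not established values.
BAND_LIMITS = {
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta": (13.0, 30.0),
}
BAND_TO_MIDI_NOTE = {"theta": 48, "alpha": 60, "beta": 72}  # C3, C4, C5

def dominant_band(band_powers):
    """Return the name of the band with the highest measured power."""
    return max(band_powers, key=band_powers.get)

def note_for_powers(band_powers):
    """Pick the MIDI note associated with the dominant band."""
    return BAND_TO_MIDI_NOTE[dominant_band(band_powers)]

print(note_for_powers({"theta": 0.2, "alpha": 0.7, "beta": 0.1}))  # → 60 (alpha dominant)
```

In the actual experiment, the band-to-note table would be replaced by the intervals derived from the test group's data rather than these fixed values.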
Benefits and implications
The study can be beneficial in many ways: the literature documents many cases where listening to music created from one's own brain waves had therapeutic effects. Although the methodology of the conversion still needs clarification and refinement, listening to music created from the brain has been reported to relieve insomnia and stress, as well as to increase concentration, focus, and energy (Groves 2005a; Groves 2005b; Grant 2008; Megan and Writer 2007). Studies have also shown the potential of music as part of neurofeedback for improving attention abilities in both clinical groups and healthy people (Gruzelier and Egner 2005). Additionally, neural music can help people with disabilities by creating an intermediary interface through which they not only can create their own music but also use the musical output as a controlling device (Steenson 1998).
Another practical implication of this work is the development of interactive music systems. The study of electroencephalography waves and their translation into sounds can give rise to new forms of musical expression (Livingstone & Miranda 1). The data derived in this way can be of great assistance to psychologists, who often struggle to evaluate a patient's emotional state (Arslan et al. 2); sonification of bio-signals can become a very useful tool for them. This area of study is also of great interest to medical workers, as it enables severely impaired patients to communicate or even control mechanical tools (Miranda et al. 81). Additionally, the findings can be adopted by developers of musical applications. Overall, this question is part of a larger discipline, Brain-Computer Interfaces (BCI). Many scientists are attempting to work out methods for the manipulation of computers by these means (Miranda & Boskamp 3). The results acquired in the course of this research can be utilized for the development of BCI systems.
Overall, the idea that EEG can be used to generate sounds is not new; it dates back to the early seventies. Since that time, researchers have tried to construct devices that can render these signals into sound. One of the most challenging difficulties is how to interpret EEG waves. Most importantly, scholars attempt to correlate them with the quantitative characteristics of music: namely pitch, rhythm, tempo, melody, and so forth. The key task, an accurate representation of brainwaves, has yet to be accomplished. Numerous attempts have been made, but at this moment none of the techniques or technologies can be regarded as infallible.
Another point that requires in-depth examination is the construction of technologies that can adequately capture EEG waves (Brouse et al. 3). It has to be admitted that the Musical Instrument Digital Interface has already been adopted by many neuroscientists to monitor brain waves in real time (Hofstadter 10). Moreover, increasingly sophisticated conversion software has enabled researchers to better use EEG signals to control MIDI (Zhang & Miranda 3). In this respect, the positioning and number of electrodes are also of great concern to researchers, as they need to ascertain which parts of the brain are directly responsible for creating melody. In this study, we may need to alternate between several positioning variants.
Apart from that, scholars have not yet agreed on which method of EEG analysis is the most effective, and there are many of them: 1) power spectrum, 2) event-related potential, 3) Hjorth analysis, 4) spectral centroid, and others (Miranda & Boskamp 4). Under some circumstances, all of them can be used quite efficiently, so it may be prudent to test each of them in connection with MIDI.
Thus, it is quite possible to argue that this research has both theoretical and practical implications: its findings can be adapted to help people who suffer from hearing impairments. The sonification of EEG can be of great value to developers of musical applications, whether technologies or software solutions. On the other hand, the study of this question can be beneficial from a theoretical point of view: it may outline the best methods of measuring, analyzing, and converting brainwaves.
Methodology and validity of the experiment
The approach taken involves transforming EEG data, collected by an EEG recording device, into raw data for sonification. The approach is influenced by the interface described in the article "Brain-Computer Music Interface for Composition and Performance" by Eduardo Reck Miranda, where different frequency bands trigger corresponding piano notes and the complexity of the signal determines the tempo of the sound (Hofstadter 2009). The correspondence between the signal and the notes will be established through experimental work, in which data from participants in a test group will be gathered and analyzed, assigning intervals of brain-wave frequencies to different notes.
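The band-triggering idea can be sketched with a plain discrete Fourier transform that estimates power in a given band from a sampled EEG window. The sampling rate, window length, and band edges below are illustrative assumptions, not the parameters of any cited system:

```python
import cmath
import math

FS = 128  # assumed sampling rate in Hz (illustrative)

def band_power(samples, lo_hz, hi_hz, fs=FS):
    """Power in [lo_hz, hi_hz) estimated with a naive DFT (O(n^2), fine for short windows)."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo_hz <= freq < hi_hz:
            coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            power += abs(coeff) ** 2
    return power

# Synthetic "EEG" window: a 10 Hz (alpha-band) sinusoid, one second long.
window = [math.sin(2 * math.pi * 10 * t / FS) for t in range(FS)]
alpha = band_power(window, 8, 13)
theta = band_power(window, 4, 8)
print(alpha > theta)  # the alpha band dominates, so it would trigger the alpha note
```

A real system would use an FFT over sliding windows of the recorded signal, but the band-power comparison driving the note choice is the same.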
Naturally, a great number of experiments have already been conducted on translating EEG signals into music. We may mention the experiment performed by a group of scientists headed by Filatriau, who created an EEG-driven audio-visual texture synthesizer (Filatriau et al. 3). This system presents the waves graphically and then converts them into music. The authors established that the intensity of the sounds is controlled by the level of energy in the alpha, beta, and theta frequency bands (Filatriau et al. 3). In part, this study demonstrates that such experiments are quite feasible. Moreover, these findings should be taken into consideration while developing conversion software.
The research by Andrew Brouse et al. can be used as a guideline for our experiment. Their work is relevant to this discussion because it marks out the major stages of EEG analysis: 1) data acquisition; 2) data preprocessing; 3) intermediate representation (the creation of a visual and sonic map); 4) visualization and sonification (Brouse et al. 9). This model is crucial for the algorithmization of EEG conversion programs: these are the key processes such programs need to perform.
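The four stages above can be sketched as a simple processing chain. The function bodies here are placeholder assumptions standing in for real acquisition hardware and sonification engines; only the overall shape of the pipeline follows Brouse et al.:

```python
def acquire(n=8):
    """Stage 1: data acquisition (here: a fixed placeholder signal, not real hardware)."""
    return [0.1 * i for i in range(n)]

def preprocess(signal):
    """Stage 2: preprocessing, e.g. removing the mean (a stand-in for artifact rejection)."""
    mean = sum(signal) / len(signal)
    return [s - mean for s in signal]

def intermediate(signal):
    """Stage 3: intermediate representation, here a coarse map of above/below baseline."""
    return ["high" if s > 0 else "low" for s in signal]

def sonify(rep):
    """Stage 4: sonification, mapping the representation to MIDI note numbers."""
    return [72 if level == "high" else 60 for level in rep]

notes = sonify(intermediate(preprocess(acquire())))
print(notes)  # → [60, 60, 60, 60, 72, 72, 72, 72]
```

The value of the model is that each stage can be developed and tested in isolation before being wired into the full conversion program.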
As previously noted, several methods of analysis can be applied to control MIDI. One of them is the Discrete Fourier Transform (DFT), which breaks EEG waves into numerous frequency bands and reveals the distribution of power among them (Miranda & Boskamp 2). Although its primary application is the monitoring of a patient's emotional state, it is equally reliable in the neuroscience of music. It is of crucial importance to examine the results of Hjorth analysis as well. This technique enables researchers to evaluate the activity, complexity, and mobility of an EEG wave, and it is a powerful instrument for creating the visual and sonic texture of the signals. We need to alternate these methods to get a more comprehensive description of EEG waves; furthermore, their combined use will allow us to determine their effectiveness for this specific task.
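Hjorth's three descriptors have standard definitions in terms of the signal's variance and that of its successive differences: activity is the variance, mobility is the square root of the ratio of the first difference's variance to the signal's, and complexity is the mobility of the first difference divided by the mobility of the signal. A minimal pure-Python version:

```python
import math

def _variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def _diff(xs):
    return [b - a for a, b in zip(xs, xs[1:])]

def hjorth(signal):
    """Return (activity, mobility, complexity) of a sampled signal.

    activity   = var(x)
    mobility   = sqrt(var(dx) / var(x))
    complexity = mobility(dx) / mobility(x)
    """
    d1 = _diff(signal)
    d2 = _diff(d1)
    activity = _variance(signal)
    mobility = math.sqrt(_variance(d1) / activity)
    complexity = math.sqrt(_variance(d2) / _variance(d1)) / mobility
    return activity, mobility, complexity

# A pure sinusoid is the simplest possible signal: its complexity is close to 1.
tone = [math.sin(2 * math.pi * 5 * t / 128) for t in range(128)]
print(hjorth(tone))
```

Irregular, broadband signals give complexity well above 1, which is what would drive the tempo parameter in the mapping described earlier.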
The system for studying brainwaves and their sonic representation includes the following components: 1) the sender of the EEG signal (in other words, the subject or patient); 2) an analytical processor; 3) a music engine and MIDI (Hofstadter 10). This model appears to be the most applicable in this situation.
It should be borne in mind that EEG waves are extremely difficult to capture and measure because the transmitter and the receiver are separated by the meninges and the cranium. This is one of the reasons why neuroscientists use many electrodes: they strive to intensify the signal. Yet, in this case, twenty electrodes will suffice. Another important component of the research methodology is the sampling of the population. The subjects should be randomly chosen, and it is critical to measure their EEG signals at different times. This approach will enable us to better represent the emotional state of the subjects. The observations made in the course of this work should be compared and contrasted with evidence accumulated by medical workers and psychologists.
The final stage will be to assess the effects of such music on the subjects. Medical workers often hypothesize that it can evoke either positive or negative emotions, and some therapists rely on it while treating patients. Undoubtedly, we will not be able to use it for therapeutic purposes, but we may at least observe people's reactions to the music generated by their own consciousness.
Among the specific devices that can be used for this research is the Pendant EEG, which can be considered the most portable and inexpensive EEG unit on the market today. The device can record and analyze the waves, with the possibility of transferring the data to a computer and subsequently to EEG analysis software. Accordingly, the choice of MIDI output can be justified by the simplicity of the sounds: the General MIDI 2 standard is universal across vendors and manufacturers, supports several channels, and can record characteristics of the sounds such as tempo, decay time, and vibrato (Pisano 2006). This device is critical for collecting and measuring EEG waves; yet, its usefulness will also depend on the analysis techniques discussed previously.
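As a concrete illustration of why MIDI output is simple to generate, a note event is just three bytes: a status byte (0x90 for note-on, OR'ed with the channel number), a note number, and a velocity. A minimal sketch, with channel, note, and velocity values chosen for illustration:

```python
def note_on(note, velocity, channel=0):
    """Build the three raw bytes of a MIDI note-on message."""
    assert 0 <= note <= 127 and 0 <= velocity <= 127 and 0 <= channel <= 15
    return bytes([0x90 | channel, note, velocity])

def note_off(note, channel=0):
    """Note-off message: status 0x80, velocity 0."""
    assert 0 <= note <= 127 and 0 <= channel <= 15
    return bytes([0x80 | channel, note, 0])

msg = note_on(60, 100)  # middle C at moderate velocity
print(msg.hex())        # → "903c64"
```

Because the format is this compact and vendor-neutral, any EEG-derived note decision can be streamed to a synthesizer in real time with negligible overhead.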
Timelines
The research will be divided into several stages, including the preliminary note assignment, the development of conversion software, and the analysis of the study group, where the results of the composed music will be measured based on several criteria: music perception and satisfaction level. At this stage, it is hardly possible to speak about exact scheduling. The primary task will be to develop the system for monitoring, measuring, and interpreting brainwaves. Furthermore, we will need to find subjects who agree to participate in this project. Recording sessions can take approximately one or two weeks; each subject must participate in two sessions. The interpretation of the data and its translation into music will be the most time-consuming part of the process. Only afterwards will we be able to examine the effects of these sounds on the physical and psychological state of the subjects.
Contribution
The study can contribute to the field of neurofeedback by providing criteria and tools for assessment. Additionally, the method of note assignment through average intervals of wave bandwidth will be tested to establish the similarity between waves and notes in different groups. Hypothetically, this study can significantly contribute to the knowledge of electroencephalography waves and their musical representation. The findings can be utilized for the development of BCI systems, and the results of the research can be of great use for musical applications. Translation of EEG waves into sounds has grown into one of the most promising and challenging areas of neuroscience. Overall, the proposed research can be a valuable contribution to the understanding of the research questions.
Works Cited
Arslan, B., Brouse, A., Castet, J., Lehembre, R., Simon, C., Filatriau, J. J. and Noirhomme, Q. Real Time Music Synthesis Environment Driven with Biological Signals, Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 2009.
Brouse, A., Filatriau, J.-J., Gaitanis, K., Lehembre, R., Macq, B., Miranda, E. and Zenon, A. An Instrument of Sound and Visual Creation Driven by Biological Signals. Proceedings of ENTERFACE'06, 2006.
Coutinho, E. Miranda, E. R. and Cangelosi, A. Towards a Model for Embodied Emotions, Proceedings of the Workshop on Affective Computing: Towards Affective Intelligent Systems, 2005.
Filatriau, J. J., Lehembre, R., Macq, B., Brouse, A. and Miranda, E. R. From EEG Signals to a World of Sound and Visual Textures, 2007.
Groves, Bob. Brain Music Therapy Tunes into Trouble; Waves Used to Help with Ups, Downs. The Record, 2005.
Groves, Bob. Music for Your Mind: Therapy Turns Brain Waves into Music to Soothe or Stimulate. Pittsburgh Post-Gazette.
Grant, Alexis. Mental Health: Brain May Benefit from Its Music; Tunes Composed from Patient Brain Waves May Relieve Anxiety, Insomnia. Houston Chronicle, 2008.
Gruzelier, J. & Egner, T. Critical Validation Studies of Neurofeedback. Child and Adolescent Psychiatric Clinics of North America, 2005, 14, 83-104.
Heslet, Lars. Our Musical Brain. (n. d.). Web.
Hofstadter, K. Investigate the use of EEG data for sonification and visualisation in a creative environment. Creative Music Technology. 2009, Anglia Ruskin University.
Livingstone, D. and Miranda, E. R. ORB3: Musical Robots within an Adaptive Social Composition System, Proceedings of the International Computer Music Conference, 2005.
Livingstone, D. and Miranda, E. R. Composition for Ubiquitous Responsive Environments, Proceedings of the International Computer Music Conference, Miami, 2004.
Megan, K. & Writer, C. S. Soothed by Your Brain Waves; Therapeutic When Converted to Music. Hartford Courant, Statewide ed., 2009.
Miranda, E R. Brain-Computer music interface for composition and performance, International Journal on Disability and Human Development, 2006 5(2):119-125.
Miranda, E. R. On the Music of Emergent Behaviour: What can Evolutionary Computation Bring to the Musician?, Leonardo 2003, Vol. 36, No. 1, pp. 55-58.
Miranda, E. R. and Boskamp, B. Steering Generative Rules with the EEG: An Approach to Brain-Computer Music Interfacing, Proceedings of Sound and Music Computing, 2005.
Miranda, E. R., Sharman, K., Kilborn, K., Duncan, A. On Harnessing the Electroencephalogram for the Musical Braincap, Computer Music Journal 2003, Vol. 27, No. 2, pp. 80-102.
Pisano, J. MIDI standards, a brief history and explanation. MustTech.net 2006. Web.
Steenson, M. W. Brain Music Points to Mouseless Future 1996. Wired. Web.
Zhang, Q. and Miranda, E. R. Multi-Agent Simulation for Generating Expressive Music Performance, Proceedings of World Congress on Social Simulation, 2008.