Introduction
The idea of human-machine communication has long been of interest to mankind. Recent technological developments show that specific low-end systems combined with EEG can be used to recognize unspoken speech and thus make direct communication possible without actually uttering words. In this respect, the impact of experiment participants' personal characteristics on EEG unspoken speech recognition is of special interest.
Literature Review
Naturally, scholars have paid considerable attention to the use of EEG (electroencephalography) for unspoken speech recognition. The first attempts to connect brain waves and speech through EEG are made by Suppes et al. (1997, p. 14967; 1998, p. 15863; 1999, p. 14659), whose research draws parallels between brain wave patterns and the words, sentences, or simple visual images processed by the brain. A further step in scholarly research on the topic is the classification of EEG signals carried out by Stastny et al. (2000, p. 2), which opens the subject to further investigation. Up to that point, brain signals had been analyzed in the time domain, while Roscher (2001) moves the discussion into the frequency domain and thereby widens the scope of the field.
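To make the time-domain versus frequency-domain distinction concrete, the following minimal Python sketch (using a synthetic, purely hypothetical signal in place of a real EEG channel, and assuming the numpy and scipy libraries are available) estimates the power spectral density of the signal with Welch's method and reports its dominant frequency:

    import numpy as np
    from scipy.signal import welch

    fs = 256                                 # assumed sampling rate in Hz
    t = np.arange(0, 4, 1 / fs)              # four seconds of samples
    # Synthetic stand-in for one EEG channel: a 10 Hz (alpha-band) component plus noise
    signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

    # Welch's method turns the time-domain samples into power per frequency
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    print(f"Dominant frequency: {freqs[np.argmax(psd)]:.2f} Hz")

Band powers obtained in this way, rather than raw time samples, are the kind of features a frequency-domain approach works with.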
Binsted et al. (2003, p. 2) consider sub-auditory speech recognition through the electromyogram (EMG) and stress the need for additional research in this direction, while Jorgensen et al. (2003, p. 3) support the topic with laboratory research that demonstrates a 92% sub-auditory word recognition rate. The work by Wester (2006) is considered a milestone in the study of EEG unspoken speech recognition, as it confirms the hypothesis that such recognition is possible. However, D'Zmura et al. (2009) and Porbadnigk et al. (2009) appear to show that Wester's EEG speech recognition success was based on artifacts in the recordings rather than on recognition of the actual words.
Research Relevance
Thus, the literature review reveals that EEG unspoken speech recognition has received considerable scholarly attention, but the point of special interest for the proposed paper, i.e. the effect of experiment participants' personal characteristics on recognition rates, appears to be under-researched. At the same time, when research deals with human beings, such personal characteristics cannot be ignored. Accordingly, the relevance of the proposed paper lies in filling this gap in prior research.
Research Questions
Accordingly, the research questions of the proposed paper are as follows:
- Main Research Question: Do personal characteristics (health conditions, age, sex) of participants in EEG unspoken speech recognition experiments affect the recognition rates?
- Sub-question 1: How can this effect, if present, be identified and researched?
- Sub-question 2: How can this effect, if present, be minimized or eliminated for a more objective picture of EEG unspoken speech recognition rates?
Methodology
To answer the above questions, the proposed research will use a quantitative methodology. In particular, six steps will be involved:
- Collect data on EEG recognition rates reported by prior scholars;
- Select a sample of participants of different health conditions, age groups, and sexes;
- Carry out a laboratory experiment using a low-end system and EEG methodology;
- Collect and compare the results for the different groups (as illustrated in the statistical sketch after this list);
- Contrast the results with prior research data;
- Draw conclusions and answer the research questions.
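As an illustration of the fourth step, the following minimal Python sketch (with purely hypothetical recognition rates and group labels, and assuming the scipy library is available) uses a one-way ANOVA to test whether per-participant recognition rates differ between three age groups; the same pattern would apply to comparisons by health condition or sex:

    from scipy import stats

    # Hypothetical recognition rates (fraction of unspoken words correctly
    # recognized) per participant; real values would come from the experiment.
    rates_young = [0.62, 0.58, 0.66, 0.61, 0.64]
    rates_middle = [0.57, 0.60, 0.55, 0.59, 0.58]
    rates_older = [0.51, 0.49, 0.54, 0.52, 0.50]

    # One-way ANOVA: the null hypothesis is that the mean recognition rate
    # is the same across the three groups.
    f_stat, p_value = stats.f_oneway(rates_young, rates_middle, rates_older)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Recognition rates differ significantly between the groups.")
    else:
        print("No significant difference between the groups was detected.")

If recognition outcomes are instead recorded per trial as correct or incorrect, a chi-square test of independence would be the analogous choice.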
Conclusion
Thus, the proposed paper focuses on a relevant point within the topic of EEG unspoken speech recognition with the help of a low-end system. The literature review supports the importance and timeliness of the topic, while the specific research questions and a suitable methodology are expected to keep the research process focused and transparent.
Works Cited
Binsted, Kim, et al. "Sub-Auditory Speech Recognition." Code IC, NASA Ames Research Center (2003): 1-5.
D'Zmura, Michael, et al. "Toward EEG Sensing of Imagined Speech." Human-Computer Interaction, Part I, HCII 2009, LNCS 5610, ed. J. A. Jacko, Springer-Verlag Berlin Heidelberg, 2009, pp. 40-48.
Jorgensen, Chuck, et al. "Sub Auditory Speech Recognition Based on EMG/EPG Signals." Computational Sciences Division, NASA Ames Research Center (2003): 1-6.
Porbadnigk, Anne, et al. "EEG-Based Speech Recognition: Impact of Temporal Effects." Cognitive Systems Lab 114 (2009): 1-6.
Roscher, G. "Real-time Recognition of Noisy Signals from Signal to Knowledge." Soft Computing (2001): 1.
Stastny, J., et al. "EEG Signal Classification." Charles University in Prague (2000): 1-5.
Suppes, Patrick, et al. "Brain wave recognition of sentences." Proceedings of the National Academy of Sciences 95 (1998): 15861-15866.
Suppes, Patrick, et al. "Brain wave recognition of words." Proceedings of the National Academy of Sciences 94 (1997): 14965-14969.
Suppes, Patrick, et al. "Invariance of brain-wave representations of simple visual images and their names." Proceedings of the National Academy of Sciences 96.25 (1999): 14658-14663.
Wester, Marek. Speech Recognition Based on Electroencephalography. Diplomarbeit (2006): 1-83.