The Robots' Rebellion: Finding Meaning in the Age of Darwin by Keith E. Stanovich expands on the neo-Darwinian theory most famously introduced in Richard Dawkins's The Selfish Gene. Stanovich argues that humanity is driven primarily by the relatively simple yet overwhelmingly powerful drive of its replicators to reproduce themselves. This drive is of evolutionary origin and is enforced through The Autonomous Set of Systems (TASS). TASS supplies responses to stimuli such as hunger or sexual arousal, as well as more complex behavioral patterns, including basic moral decisions and preferences. Humans, however, viewed in this framework as vehicles whose primary function is to carry and pass on the replicators' information, possess a flexible set of instructions of their own, which, Stanovich argues, follows from the sheer complexity of the system. This arrangement is described as a long leash, as opposed to the short leash of simpler vehicles: the long leash allows TASS directives to be interpreted relatively freely, producing a more adaptable vehicle.
It is this long leash that sets humans apart from other complex vehicles. According to Stanovich, humans differ from other vehicles because they can comprehend the existence of TASS, its principles, and the ways it shapes their behavior. This gives them the unprecedented opportunity to identify the origin of a preference or decision and to regulate the resulting behavior; in other words, humans can rebel against TASS and pursue an agenda of their own. Stanovich argues that this opportunity should be taken whenever the rebellion benefits the vehicle, for example, eating less fat and sugar despite the cravings TASS produces. However, he readily admits there is no definitive way of organizing such an analysis.
The conflict between rationalization and emotion-driven morality can be illustrated through an example presented by Jonathan Haidt. Haidt describes an experiment in which people are presented with a description of an offensive yet harmless action, such as a man eating a chicken carcass after having intercourse with it, and are asked whether it is an acceptable thing to do. Most of the time, people immediately respond that the action is immoral but cannot provide a valid reason why, sometimes plainly stating that they simply feel it is wrong. This effect, Haidt argues, occurs because the reaction is initially determined by emotions and feelings (e.g., disgust) rather than logic, and the subsequent reasoning is a post-hoc effort to justify that intuitive judgment (Haidt, 2001). Haidt also notes that when pressed for time, subjects are even more likely to choose the emotion-driven answer.
This phenomenon can be used to manipulate moral judgment, as another experiment by Wheatley and Haidt shows: hypnotically induced disgust can alter an individual's reaction to an unrelated event, resulting in harsher judgments (Wheatley & Haidt, 2005).
An even more vivid example of judgment alteration is provided by variations of the thought experiment known as the trolley problem:
In the initial variant, a man standing near a railroad track sees a runaway trolley. Five people are working on the track further down the trolley's path. He can save them by throwing a switch and diverting the trolley onto a side track, where only one person is working. Doing so, he would save five by sacrificing one. Should he do it?
In most cases, people answer positively, exhibiting an adherence to utilitarianism: the death of one is judged preferable to the death of five. However, this response can be altered by changing the setting of the story. In the second variant, a surgeon is operating on five patients, each of whom is in desperate need of a different organ transplant. At that very moment, a sixth patient comes in for a routine check-up and, coincidentally, turns out to be a perfect donor match for all five. Should the doctor kill the sixth patient to save the other five?
Interestingly, while the premise is almost identical, the answers differ drastically, with most people describing the act as killing and thus unacceptable (Thomson, 1985; Crockett, 2013). This response can be described as deontological: it appeals to a universal moral code rather than to the overall benefit. What makes people answer differently, according to Jim Davies of Carleton University, is the difference between a direct action that contributes to a person's death and the operation of a mechanism that leads to a person's death. Davies suggests that over the course of evolution humans were imprinted with a rejection of direct killing (as a means of preserving the tribe), whereas indirect killing is a recent enough phenomenon to go unrecognized by this inherited moral code. In such cases, people therefore reason more clearly and tend to make decisions in accordance with the common good rather than with a feeling of righteousness. This assumption is backed up by a series of experiments conducted by Joshua Greene, one of the authors of the dual-process theory of moral judgment. Greene proposed that the human brain has moral subsystems that interact while solving complex cognitive problems, and his research showed a correlation between the complexity of the decision being made and the activity of particular brain regions (Greene et al., 2001). Greene also points out the inhibiting effect that cognitive load has on the utilitarian mode of thinking (Greene et al., 2008).
The trolley problem is one of the best researched and documented cases of the kind of rebellion Stanovich proposes. It helps us better understand the mechanisms of reasoning that are characteristic of humans. More importantly, it gives us a workable instrument for augmenting our reasoning, albeit in its earliest and least developed form. Stanovich states that we should approach our decisions critically to see which of them serve the purpose of benefiting vehicles and which are a product of the replicators and should thus be ignored. He admits, however, that such analysis is not a transparent process, especially for the untrained mind, and that there is no fixed reference point on which to base our judgment, so we need to keep reassessing our goals. The trolley problem variations show us a possible, albeit partial, solution. Suppose there is a controversial topic that needs to be discussed publicly. Drawing on what is known about the genetic side of psychology, one can rephrase the statement so as to avoid the intuitive fallacies triggered by the replicators. This, of course, raises further ethical questions, such as the possibility of running the same process in reverse to manipulate judgments rather than clarify them. In fact, the effect described by Davies can already be recognized in several forms firmly instilled in our culture (e.g., the mechanisms for diffusing responsibility in the execution of capital punishment). Nevertheless, we should understand that neo-Darwinian ideas, such as those presented in Stanovich's book, improve our understanding of the nature of morals and thus call for changes in our approach to established moral norms.
References
Crockett, M. J. (2013). Models of morality. Trends in Cognitive Sciences, 17(8), 363-366.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108.
Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107(3), 1144-1154.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814-834.
Thomson, J. J. (1985). The trolley problem. Yale Law Journal, 94, 1395-1415.
Wheatley, T., & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16(10), 780-784.