Chapter 4: Why Has “Self” Evolved?
Beyond the potential engineering virtues associated with adding sensations to motivate the function of certain varieties of subject, a second auxiliary discussion is also presented here. This second discussion concerns the sense in which the value of existence is “tragic.”
The stated “sensations mode of function” raises a question of engineering — exactly how might this dynamic have been useful in the evolution of associated subjects? Observe, for example, how quickly our machines have advanced, presumably without any ability to experience sensations. This suggests that evolution could have (and indeed must have) constructed very dynamic examples of purely “instinctive life.” If so, however, then why has a separate “operating system” also emerged which effectively causes existence to have a “good/bad” potential? What exactly are the engineering virtues of permitting (and forcing) subjects to experience positive (and negative) sensations?
Though it might be helpful here to understand exactly which subjects contain “self,” and by elimination which are purely “instinctive,” I have only broad presumptions at my disposal. Perhaps subjects that are at least as advanced as fish can experience sensations, while the “simplest,” such as microbes, plants, and fungi, cannot. I currently have few convictions about “the middle,” or essentially where this transition occurs.
For example, in many respects the ant does not clearly suggest that it harbors “self,” which is to say that it experiences positive and/or negative sensations. Because it seems perfectly willing to sacrifice its body for its society, perhaps it’s more similar to “a robot” than something that might experience negative sensations when its body becomes damaged. Nevertheless, if the ant harbors a consciousness from which to experience sensations in any capacity at all, then from the presented model it also has an associated “self dynamic.” With my very limited grasp of ant biology, however, I currently have few convictions in this respect.
I do, however, presume that the least “complex” subjects, such as microbes, plants, and fungi, harbor no personal entity. I base this presumption on the premise that at some point subjects should exist that are so “basic” that they have no consciousness, and thus experience no sensations. From the presented model, “instinct” is the mechanism that underlies the operation of all subjects in our diverse ecosystem — it presumably pumps blood, builds cells, battles infection, and so on. Thus we should expect to find examples that function exclusively through “non-sensation” mechanisms.
So then what might be the engineering virtues of a “punishment/reward,” or “personal relevance” dynamic? If all relatively “advanced” subjects experience sensations, then why might instinct alone have been an insufficient means from which to develop them? My theory here is essentially that a punishment/reward dynamic helped simplify evolution’s “programming demands,” and did so in a way that promotes functionality by addressing “unforeseen challenges.”
Notice that under the sensations mode of function, evolution does not mandate what a given subject must do under a given set of circumstances, while it does essentially need to decide this for purely instinctive varieties. Rather than specify what must be done given associated circumstances, decisions from the personal mode are effectively “subcontracted out” for a subject itself to evaluate — the subject is encouraged to “consciously decide how to proceed” given the perceived punishments and rewards that exist for it. If punishing/rewarding sensations are set up to generally motivate behavior which promotes genetic proliferation, then this type of subject might succeed under various situations that would be difficult to assess/predict, and presumably would be difficult for exclusively instinctive mechanisms to productively address. From the presented model, sensations are the element which “drives the conscious mind,” and apparently consciousness itself can be an effective auxiliary mode of function.
For example, consider the lengthy set of “programming” that a purely instinctive subject should generally possess, which specifies how it shall acquire food. Then consider a subject that is very similar in many ways, though this one also has the potential to experience sensations. Observe here that certain instinctive instructions might safely be omitted in the subject that can experience sensations, since this one might “personally” figure out how to get fed to some degree. Thus evolution might denote how “hunger” feels, how various things “taste,” and so on, and this subject will then have motivation from which to essentially “figure out what to do” given its perceptions of the various punishments and rewards that exist for it. Under this dynamic perhaps certain “unforeseen challenges” will then have more potential to be overcome, given that evolution isn’t required to address them specifically. Instead, here there is “a consciousness” from which to sense personal existence, and in this regard the potential for it to “understand” what is happening. Thus perhaps various situations might then be exploited given that existence can be personally relevant — or the subject’s “programming” becomes somewhat “subcontracted out” through personal motivation to figure out how to proceed.
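The contrast above can be sketched as a toy program. This is only an illustration under my own hypothetical names (the agents, situations, and scores are not from the text): an “instinctive” agent needs one pre-written rule per circumstance, while a “reward-driven” agent is given only a scoring of outcomes and evaluates its options itself, so a circumstance the programmer never anticipated can still produce behavior.

```python
def instinctive_agent(situation):
    # Every circumstance must be anticipated in advance by the "programmer":
    # one explicit rule per situation, nothing else.
    rules = {
        "food_left": "move_left",
        "food_right": "move_right",
    }
    return rules.get(situation)  # an unforeseen situation yields no behavior


def reward_driven_agent(situation, actions, reward):
    # Only the reward function is specified (how "hunger" feels, how things
    # "taste"); the agent itself evaluates each option and picks the best.
    return max(actions, key=lambda a: reward(situation, a))


# A situation the instinctive rule table never anticipated:
novel = "food_behind_obstacle"
assert instinctive_agent(novel) is None  # no rule, so no behavior

# The reward-driven agent still acts, because it can score outcomes:
def reward(situation, action):
    # hypothetical scoring: going around the obstacle reaches the food
    scores = {"move_left": 0, "move_right": 0, "go_around": 10}
    return scores.get(action, 0)

choice = reward_driven_agent(novel, ["move_left", "move_right", "go_around"], reward)
assert choice == "go_around"
```

The point of the sketch is only the division of labor: the first agent’s designer must enumerate every circumstance, while the second designer specifies only the punishments and rewards and “subcontracts” the decision out.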
I’m suggesting that if a human were to exist that could not experience sensations, and thus could not be punished or rewarded, then its conscious mind would have no motivation from which to function. This subject would “live,” with hair that grows, flowing blood, and so on, but have no incentive to use its otherwise complete conscious mind in order to figure out what to “do.” This idea is essentially that evolution developed “sensations” in order to motivate subjects so that they would in this manner “program themselves.” And given the advanced nature and proliferation of sensation-experiencing subjects, apparently this “consciousness mode of function” does have virtues which surpass the primary mode in certain regards. My associated theory is essentially that “the subcontracting of decisions over to a personal subject may be helpful to address unforeseen challenges.”
But why then would instinctive function not evolve that works in the same essential manner? Without feeling any “sensations,” such as “thirst,” “envy,” or “hope,” observe that a purely instinctive subject theoretically might develop “programming” such that it effectively functions as if it is “thirsty,” “envious,” or “hopeful.” This subject might thus behave exactly as if it has “self,” without there actually being any — or without a potential for “punishment” or “reward” to indeed occur.
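The worry above — that instinct might merely mimic a punishment/reward dynamic — can also be made concrete with a toy sketch (my own illustration, with hypothetical names): the choices of a reward-driven agent can, in principle, be compiled into a fixed rule table, producing a subject that behaves exactly “as if” it were thirsty while nothing is evaluated, or felt, at decision time.

```python
def reward_driven(situation):
    # "thirst" modeled as an explicit score the agent maximizes
    if situation == "dry":
        options = {"drink": 10, "wait": 0}
    else:
        options = {"drink": 0, "wait": 5}
    return max(options, key=options.get)


# Compile the same behavior into a fixed lookup table -- a purely
# "instinctive" subject that acts as if thirsty, with no scoring
# (no "sensation") occurring when it decides:
instinct_table = {s: reward_driven(s) for s in ["dry", "sated"]}

for s in ["dry", "sated"]:
    assert instinct_table[s] == reward_driven(s)  # behaviorally identical
```

The sketch shows only that the two are behaviorally indistinguishable from the outside, which is precisely why the question in this paragraph — why evolution produced actual sensations rather than simulations of them — remains open.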
Perhaps our engineers will ultimately answer this question for us — perhaps they will ultimately build machines that are so “advanced” that this theorized limit to purely instinctive function does indeed become apparent. If so they might generally conclude: In some ways we seem to have reached “the limits” of our engineering potential — apparently our “instinctive machines” will remain deficient in areas such as (this, this, this, and so on). In order to improve them in these specific regards, it does not seem possible for us to continue building them in ways which merely “simulate” a punishment/reward dynamic. Instead, apparently we would need to build them such that they truly are punished/rewarded, or machines with consciousness through which to feel good and/or bad based upon the conditions that we specify. In order to engineer them more effectively regarding the above difficulties, apparently we would need to give our machines actual “selves.”