Chapter 4: Why Has “Self” Evolved?


Beyond the potential engineering virtues of adding sensations to motivate the function of certain varieties of subject, a second auxiliary discussion is also presented here. This second discussion concerns the sense in which the value of existence can be “tragic.”

The stated “sensations mode of function” raises a question of engineering — exactly how might this dynamic have been useful in the evolution of associated subjects? Observe, for example, how quickly our machines have advanced, presumably without any ability to experience sensations. This suggests that evolution could have (and indeed must have) constructed very dynamic examples of purely “instinctive life.” If so, however, then why has a separate “operating system” also emerged which effectively gives existence a “good/bad” potential? What exactly are the engineering virtues of permitting (and forcing) subjects to experience positive (and negative) sensations?

Though it might be helpful here to understand exactly which subjects contain “self,” and by elimination which are purely “instinctive,” I have only broad presumptions at my disposal. Perhaps subjects that are at least as advanced as fish can experience sensations, while the “simplest,” such as microbes, plants, and fungi, cannot. I currently have few convictions about “the middle,” or essentially where this transition occurs.

For example, in many respects the ant does not clearly behave as if it harbors “self,” which is to say as if it experiences positive and/or negative sensations. Because it seems perfectly willing to sacrifice its body for its society, perhaps it’s more similar to “a robot” than to something that experiences negative sensations when its body becomes damaged. Nevertheless, if the ant harbors a consciousness from which to experience sensations in any capacity at all, then from the presented model it also has an associated “self dynamic.” With my very limited grasp of ant biology, however, I currently have few convictions in this respect.

I do, however, presume that the least “complex” subjects, such as microbes, plants, and fungi, harbor no personal entity. I base this presumption on the premise that at some point subjects should exist that are so “basic” that they have no consciousness, and thus experience no sensations. From the presented model, “instinct” is the mechanism that underlies the operation of all subjects in our diverse ecosystem — it presumably pumps blood, builds cells, battles infection, and so on. Thus we should expect to find examples that function exclusively through “non-sensation” mechanisms.

So then what might be the engineering virtues of a “punishment/reward,” or “personal relevance” dynamic? If all relatively “advanced” subjects experience sensations, then why might instinct alone have been an insufficient means from which to develop them? My theory here is essentially that a punishment/reward dynamic helped simplify evolution’s “programming demands,” and did so in a way that promotes functionality by addressing “unforeseen challenges.”

Notice that under the sensations mode of function, evolution does not mandate what a given subject must do under a given set of circumstances, while it does essentially need to decide this for purely instinctive varieties. Rather than specify what must be done given associated circumstances, decisions from the personal mode are effectively “subcontracted out” for the subject itself to evaluate — the subject is encouraged to “consciously decide how to proceed” given the perceived punishments and rewards that exist for it. If punishing/rewarding sensations are set up to generally motivate behavior which promotes genetic proliferation, then this type of subject might succeed in various situations that would be difficult to assess or predict, and that would presumably be difficult for exclusively instinctive mechanisms to productively address. From the presented model, sensations are the element which “drives the conscious mind,” and apparently consciousness itself can be an effective auxiliary mode of function.

For example, consider the lengthy set of “programming” that a purely instinctive subject should generally possess to specify how it shall acquire food. Then consider a subject that is very similar in many ways, though this one also has the potential to experience sensations. Observe here that certain instinctive instructions might safely be omitted in the subject that can experience sensations, since this one might “personally” figure out how to get fed to some degree. Thus evolution might specify how “hunger” feels, how various things “taste,” and so on, and this subject will then have motivation from which to essentially “figure out what to do” given its perceptions of the various punishments and rewards that exist for it. Under this dynamic perhaps certain “unforeseen challenges” will then have more potential to be overcome, given that evolution isn’t required to address them specifically. Instead, here there is “a consciousness” from which to sense personal existence, and in this regard the potential to “understand” what is happening. Thus perhaps various situations might then be exploited given that existence can be personally relevant — or the subject’s “programming” becomes somewhat “subcontracted out” through personal motivation to figure out how to proceed.
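To make the contrast concrete, here is a minimal sketch in Python. It is only a loose analogy under invented assumptions: the foods, the reward values, and both agents are hypothetical, and the learner is just a crude value-averaging loop, not a claim about biology. The point is simply that the “instinctive” agent can only follow rules written in advance, while the “punished/rewarded” agent can profit from a food source its designer never anticipated.

```python
import random

FOODS = ["seed", "berry", "novel_fruit"]  # "novel_fruit" is the unforeseen case

class InstinctiveAgent:
    """Behavior fully specified in advance: a fixed rule table."""
    RULES = {"seed": "eat", "berry": "eat"}  # no rule exists for novel_fruit

    def act(self, food):
        return self.RULES.get(food, "ignore")  # unforeseen items get ignored

class RewardDrivenAgent:
    """Behavior "subcontracted out": only the reward signal is specified,
    and the agent estimates the value of each option from experience."""
    def __init__(self):
        self.value = {}  # learned estimate of how good eating each food is

    def act(self, food):
        if food not in self.value:      # never tried it: motivated to explore
            return "eat"
        return "eat" if self.value[food] > 0 else "ignore"

    def learn(self, food, reward):
        old = self.value.get(food, 0.0)
        self.value[food] = old + 0.5 * (reward - old)  # running average

def taste(food):
    """Stands in for the sensations that evolution specifies: how things taste."""
    return {"seed": 1.0, "berry": 1.0, "novel_fruit": 2.0}[food]

random.seed(0)
instinct, learner = InstinctiveAgent(), RewardDrivenAgent()
fed = {"instinct": 0.0, "learner": 0.0}
for _ in range(200):
    food = random.choice(FOODS)
    if instinct.act(food) == "eat":
        fed["instinct"] += taste(food)
    if learner.act(food) == "eat":
        reward = taste(food)
        learner.learn(food, reward)
        fed["learner"] += reward

print(fed)  # the learner also exploits the food source no rule anticipated
```

Note that only taste() and the crude update rule were specified in advance here; what to do about novel_fruit was left for the agent itself to settle.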

I’m suggesting that if a human were to exist that could not experience sensations, and thus could not be punished or rewarded, then its conscious mind would have no motivation from which to function. This subject would “live,” with hair that grows, flowing blood, and so on, but would have no incentive to use its otherwise complete conscious mind to figure out what to “do.” The idea is essentially that evolution developed “sensations” in order to motivate subjects so that they would in this manner “program themselves.” And given the advanced nature and proliferation of sensation-experiencing subjects, apparently this “consciousness mode of function” does have virtues which surpass the primary mode in certain regards. My associated theory is essentially that the subcontracting of decisions over to a personal subject may be helpful for addressing unforeseen challenges.

But why then wouldn’t instinctive function evolve that works in the same essential manner? Without feeling any “sensations,” such as “thirst,” “envy,” or “hope,” a purely instinctive subject theoretically might develop “programming” such that it effectively functions as if it were “thirsty,” “envious,” or “hopeful.” This subject might thus behave exactly as if it has “self,” without there actually being any — or without any potential for “punishment” or “reward” to indeed occur.
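In the terms of the hypothetical sketch above, the objection amounts to this: the learner’s finished decisions could simply be frozen into a rule table, yielding an agent that behaves identically while no reward signal ever occurs.

```python
class AsIfAgent:
    """Acts exactly as the reward-driven agent does after learning,
    but via a frozen rule table: no punishment or reward ever occurs."""
    RULES = {"seed": "eat", "berry": "eat", "novel_fruit": "eat"}

    def act(self, food):
        return self.RULES.get(food, "ignore")

zombie = AsIfAgent()
print([zombie.act(f) for f in ["seed", "berry", "novel_fruit"]])
# -> ['eat', 'eat', 'eat']: behavior "as if" hungry, with no one being punished or rewarded
```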

Perhaps our engineers will ultimately answer this question for us — perhaps they will ultimately build machines that are so “advanced” that this theorized limit to purely instinctive function does indeed become apparent. If so they might generally conclude: In some ways we seem to have reached “the limits” of our engineering potential — apparently our “instinctive machines” will remain deficient in areas such as (this, this, this, and so on). In order to improve them in these specific regards, it does not seem possible for us to continue building them in ways which merely “simulate” a punishment/reward dynamic. Instead, apparently we would need to build them such that they truly are punished/rewarded, or machines with consciousness through which to feel good and/or bad based upon the conditions that we specify. In order to engineer them more effectively regarding the above difficulties, apparently we would need to give our machines actual “selves.”

 


4 Comments on “Chapter 4: Why Has “Self” Evolved?”

  1. Hariod Brawn says:

    ‘. . . decisions from the personal mode are effectively “subcontracted out” for a subject itself to evaluate — the subject is encouraged to “consciously decide how to proceed” given the perceived punishments and rewards that exist for it.’

    Are we certain this happens? Have we not now established [since Libet] that the timing of neural events demonstrates that the appearance of decision making is in fact a post-dictive illusion – i.e. an explanation after the event?

    ‘This subject might thus behave exactly as if it has “self,” without there actually being any’

    This is precisely what is happening, it seems to me; it’s not a hypothesis.

    Am I misunderstanding you on the above? Thanks for the article!

    • Hello Hariod,

      I hadn’t gone through this for a while, and now I’m quite embarrassed that I put you through it! The same thing happened when Mike Smith was confused about my “Cocktail Party.” It’s simply been too educational and fun to invest in the sites of others rather than get my own house in order. But if people such as yourself are going to stop by, then I really should get to it!

Given that I’m as “hard” a determinist as they come, I couldn’t agree with your observation more. But an important key to my position can be summed up with the term “perspective.” From the amazingly ignorant perspective of the conscious subject (such as myself), freedom does still seem to exist. It’s from here that I was theorizing a “subcontracting out of decisions.” Now let’s see if I can summarize the point of this pathetically rambling subchapter.

      The question is, why did “utility” evolve? (I’ve now ditched “sensations.”) Why was there a need for some life to feel good as well as bad? I suspect that this helped improve “autonomy.”

If we want to build a useful robot today, we obviously must design it to deal with foreseen challenges. Once “minds” were at the disposal of evolution, or “central information processors,” it surely also needed to program these creations with “If…then…” types of statements. But apparently adding progressively more of that to deal with the various challenges of an open environment was not in itself sufficient. Thus evolution might have effectively said, “Okay, now I’m also going to make these subjects somewhat figure things out for themselves. Here they’ll experience pain, hunger, fear, joy, hope and so on, to punish and reward them for failing and succeeding.” I suspect this is essentially why utility and consciousness emerged — there were too many unforeseen challenges for evolution to specifically account for, so it began to provide incentive for its creations to figure things out “personally.”

      As I’ve said Hariod, I’m very happy to have found such a sensible group of people over at https://selfawarepatterns.com/. I wonder if you’d be interested in a private chat?

      Email: thephilosophereric@gmail.com

      • Hariod Brawn says:

        No need in the least for any apology, Eric, although some may be in order from myself for the indelicate way I raised my query.

Are you saying that the sense of agency, insofar as it is attributable to a putative self, is a result of what we imagine to be a ‘sub-contracting out’ in a fabricated/illusory off-line processing to the homunculus we never question the existence of, the supposed inner agent quietly assumed to be ‘the self of me’?

        In other words, the actual process of decision-making is forever concealed to conscious awareness, so a false sense of agency evolved in its stead, one that becomes so ingrained as a given it seems absurd ever to question its existence or nature.

        How are we so ubiquitously fooled?

        I think proprioception plays a role here – the sense of ‘me doing (something)’.

This brings us to your next point about evaluative functions. Perhaps the limbic system (can I use that term?) is where the actual decisions are made. So, what ‘feels’ right intuitively is selected, and then what follows is the post-dictive illusion of conscious decision-making.

        If something like that is about right, then it fits your hypothesis, I think(?), and so the next question seems to be: what happens when we no longer believe in the mind-trick? I don’t mean what happens when we develop a theory that we believe fits reality, but what happens when we in our bones, so to speak, ‘know’ the self to be thoroughly discredited, seen as a put-up job?

        Is the self an artefact of evolution that is past its ‘sell-by’ date?

        P.S. I am somewhat over-burdened currently helping a close friend put their book together, Eric, and so as far as is possible, am trying to keep dialogue on-blog and limited. That said, then of course you have my email address in your dashboard; so if it were something that needed privacy and was not likely to become protracted, then do fire away. 🙂

  2. Thanks Hariod,

    The opportunity to gain myself an intelligent new friend is really all I’m looking for with any private discussions between us. I’ll certainly pass you a note if I find something to say that I think you’ll appreciate. Regarding free time, I’m actually weeks late responding to a psychologist friend, and of course the current state of my website remains deplorable. I seem to have become extra defensive as my ideas progressively anger more people online. There seems to be a natural conflict, since I believe that our mental/behavioral sciences will require radical change, though many seem quite invested in the current paradigm.

    Apparently I’ve not yet clarified my position above, so let’s see if the following helps:

I find it useful to metaphorically consider myself a “reverse architect” rather than a “reverse engineer.” I believe that good architecture will be needed first and foremost in order for our “soft sciences” to progress, though architecture seems easily tainted by the engineering side. I consider it unfortunate that something as basic to these sciences as consciousness remains such a great mystery. Only once we’ve developed good architecture will I be comfortable permitting engineers to physiologically assess such models. So yes, you were right to ask whether things such as the limbic system apply, and regarding my own models, no they don’t. In fact I don’t even permit myself to use the term “brain.” Instead my ideas concern that which is “mental,” as well as an opposing “mechanical” idea. From this perspective a “homunculus” theoretically could be used, though I have no use for a “little man in the head” concept.

    In the end I suppose that I won’t be able to sufficiently explain why I think evolution developed consciousness, as well as why we can effectively be “free” under a fully determined environment, without running through the basics of my position itself. See if this works:

“Utility” (and I do hate this term!) seems to represent the fundamental unit of value, or good/bad, throughout all of existence. Thus from this theory, the more positive utility that something experiences at a given moment, the better existence will be for it at that time, with negative utility being the opposite. Conversely, for anything which experiences no utility whatsoever, existence remains perfectly inconsequential.

    Furthermore this “utility” stuff seems to be manufactured in the non-conscious mind to serve as the exclusive motivating “input” to the conscious mind. The non-conscious mind may be thought of as a standard oblivious computer, with the conscious mind as something which is instead incited to function by pain, thirst, itchiness, humor, empathy, and phenomenal experience in general. Of course consciousness should be all that you, I, or anything, knows of existence.

So why were these oblivious non-conscious minds, which necessarily came first, inadequate to do what evolution required of them? Why couldn’t such minds be “programmed” to do what conscious minds do (such as the theorized p-zombie)? I suspect that it wasn’t possible for evolution to effectively “foresee” the sorts of programming that would work best for various kinds of conscious life to have (such as how to build a car, let alone drive one), and that even if it could make such predictions, loading up massive amounts of information, as we must do with our computers, simply wouldn’t work well enough. Thus evolution seems to have “cheated,” or added the utility which motivates the function of conscious computers. Yes, evolution does still mandate how toenails will grow, hearts will beat, and so on, but conscious subjects also decide various things based upon the punishments and rewards which they foresee regarding their personal existence. Thus from here evolution didn’t need to design each and every rule regarding the function of a bird, for example, but rather forced such creatures to personally figure certain things out. Our robots should never be able to do such “cheating,” that is unless we overcome “the hard problem” of making them feel good and/or bad, and thus render them conscious. Here we’d be somewhat “subcontracting out their programming.”
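Here is a minimal sketch of this two-computer picture, again in Python and again under invented assumptions: the division of labor, the signal names, and all of the numbers are hypothetical illustrations, not a specification of the model.

```python
def nonconscious_mind(body_state):
    """Oblivious computer: manufactures utility signals from bodily state."""
    return {
        "thirst": -body_state["dehydration"],     # punishing signal
        "taste":   body_state["sugar_on_tongue"]  # rewarding signal
    }

def conscious_mind(foreseen_utility):
    """Motivated computer: chooses whichever option it foresees as
    yielding the most utility; nothing else drives its function."""
    return max(foreseen_utility, key=foreseen_utility.get)

# The conscious mind never consults the body directly, only the utility
# the non-conscious mind would manufacture under each imagined outcome.
options = {
    "drink_water":  sum(nonconscious_mind(
        {"dehydration": 0.1, "sugar_on_tongue": 0.0}).values()),
    "keep_walking": sum(nonconscious_mind(
        {"dehydration": 0.9, "sugar_on_tongue": 0.0}).values()),
}
print(conscious_mind(options))  # -> "drink_water"
```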

    Then secondly, we can say that conscious life has “agency” in the sense that it has an extremely limited perspective regarding what’s going to happen. Reality functions as it necessarily must given causality, but we’re relative idiots in this regard. A bird should effectively be free to choose what it does from its tiny perspective, though from larger perspectives it should simply be “a puppet.” Regardless, consciousness certainly still works.

    Cheers!

