Chapter 11: The Social Entity and Subject Identification


I’ve now presented a mechanism through which the personal entity may effectively “expand” to encompass the welfare of others: empathy can do this indirectly, in the sense that our perceptions of the sensations which others experience can bring us personal sensations as well. Furthermore, theory of mind varieties of sensation can have both opposing and conforming relevance, in the sense that our non-conscious minds may punish or reward us through our perceptions of how we are perceived by others and by ourselves. To this point, however, the essential focus of this discussion has simply been the personal entity. The only true “social scenario” presented so far was an auxiliary chapter four reference to “the aggregate sensations of life over time” (an idea that will be addressed again here). Personal dynamics have been emphasized thus far, since the nature of the personal entity should be established before combinations of personal entities may properly be considered. We will now move on to “the social entity,” however, and thus formally consider the nature of “social good.”

The personal entity has been defined specifically as the sensations which a given subject experiences. This is technically presented as instantaneous, since a subject does not directly experience past or future sensations. Nevertheless, “memory of the past” and “anticipation of the future” seem to bring present sensations, or something that effectively “expands the self” in this respect. Thus we apparently do perceive a continuous self, even though this perception may technically be considered an illusion.

Also, the self over time has been defined as the aggregate sensations compiled by associated instantaneous personal entities. This “aggregate” compilation of individual personal entities should be a useful idea, since it permits each moment of sensation to impart its own specific magnitude to a given subject over time.

It is from this last idea that the concept of “social good” will now be built. If “personal good over time” is defined by adding positive personal sensations and deducting the negative, then “social good over time” shall be defined in quite the same way, though regarding the sensations of two or more such entities. Thus a policy which generally promotes the happiness of two or more individuals will be good for this society (or “subject”), and specifically to the degree that the policy promotes the aggregate quantity/quality of these experienced sensations over a given period of time.
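The arithmetic behind this definition is simple enough to sketch in code. What follows is a minimal illustration, assuming each moment of sensation can be quantified as a signed magnitude; all names and figures here are merely illustrative, not part of the formal theory.

```python
# A minimal sketch of the aggregate sensations model, assuming each
# moment of sensation can be quantified as a signed number
# (positive = pleasant, negative = unpleasant). Figures are illustrative.

def personal_good(sensations):
    """Good over time for one subject: the signed sum of its sensations."""
    return sum(sensations)

def social_good(members):
    """Good over time for a social entity: the sum over all its members."""
    return sum(personal_good(m) for m in members)

# Two hypothetical members of a society:
one = [3, -1, 2]    # aggregately positive existence: +4
two = [-2, -2, 1]   # aggregately negative existence: -3

print(personal_good(one))        # 4
print(social_good([one, two]))   # 1
```

Note that the social total is simply the personal totals combined, which is why each moment of sensation imparts its own specific magnitude to the larger subject as well.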

As mentioned, a social scenario was presented in the second discussion of the auxiliary chapter four. If the aggregate negative sensations for “all future life on Earth” will ultimately be greater than the aggregate positive sensations, then nonexistence would benefit this enormous future society as a whole, and to the exact degree of this discrepancy. Thus if a great meteor were instead to hit our planet and quickly terminate what would otherwise be a very long and horrible compilation of future sensations, then this event would be “good” for this subject as a whole. (Note: In chapter four I had not yet presented a social model of good, or even chapter six’s aggregate sensations theory of personal good. At that point I simply left this specific idea to be presumed.)

Now that this social model has formally been presented, however, it will be used to help demonstrate the following point: Given the vast number of personal and social subjects that any specific argument of “good” might conceivably take, each such argument will require a unique subject on which to be based. In all cases where a specific subject is not stated, or at least understood, an associated argument will remain “functionally incomplete.”

I view this subject identification condition as a major reason that the “Ethics” branch of “Philosophy” has not yet developed any generally accepted theory (which is not to imply that other varieties of philosophy have been superior). My general perception here is that a good will be identified from a given ideology, and perhaps even regarding a specific subject. But then a conflicting good will tend to be presented, though regarding a separate subject. Apparently it’s been perceived that ideologies which contain such conflicting demonstrations of good must not be “valid.” From my own perspective, conversely, the good of separate subjects need not correspond. Indeed, we should expect that the interests of separate subjects would naturally diverge somewhat, simply because they are not the same.

In the chapter four scenario as mentioned, observe that if existence does ultimately represent a horrible sensation-based tragedy for future life on Earth in general, then this is still just one subject out of countless potential others that might be considered. If “life on Earth” were to continue, there would presumably still be a great society of individuals that would experience anywhere from marginal to tremendous happiness during the course of their existence. So for this separate subject, the termination of life would be a great tragedy, even if it would be better for life in general that the much larger subject (which does still include “the happy”) not exist.

To practically demonstrate one apparent “subject identification error” in the field of philosophy, consider a prominent argument against the just presented aggregate sensations model of social good. This objection was given in 1984 by Derek Parfit, a distinguished English philosopher. Apparently in a traditional sense I may be classified as a “total utilitarian,” and this places my theory squarely on the wrong side of Dr. Parfit’s “Mere Addition Paradox,” or “Repugnant Conclusion.”

Dr. Parfit observed that from the presented Utilitarian premise, a huge society in which each member experiences sensations that are only slightly positive is defined to be “more good” than a much smaller society where each member experiences extremely positive sensations, as long as the total magnitude of these strong sensations falls short of that which exists in the much larger but weakly sensationed society. Furthermore, from this perspective a huge society where members experience only mildly negative sensations would be worse than a much smaller society where members experience amazingly horrible forms of torture, as long as the compiled strong sensations do not reach the magnitude of the compiled slightly negative sensations of the much larger society.
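The comparison can be made concrete with a small sketch, using purely hypothetical population sizes and welfare levels of my own choosing:

```python
# The total-utilitarian comparison behind the "Repugnant Conclusion."
# All population sizes and welfare levels below are hypothetical.

def total_good(population, welfare_per_member):
    """Aggregate good of a society whose members share one welfare level."""
    return population * welfare_per_member

vast_but_marginal = total_good(10_000_000, 1)    # barely-positive lives at scale
small_but_blissful = total_good(10_000, 100)     # extremely positive lives

# The vast marginal society scores higher in aggregate:
print(vast_but_marginal > small_but_blissful)    # True

# And symmetrically for the negative case: mild misery at scale
# compiles to a worse (more negative) total than intense torture
# within a small group.
vast_mild_misery = total_good(10_000_000, -1)
small_torture = total_good(10_000, -100)
print(vast_mild_misery < small_torture)          # True
```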

From here Dr. Parfit apparently reasoned that it would be “good” for us to essentially breed countless people with marginal levels of happiness rather than a more manageable number of extremely happy people. Given his assertion that such implications may only be deemed “repugnant,” he apparently then assumed that this ideology cannot be an accurate description of reality.

Regardless of any sensations of repugnancy which a given observer might experience, however, his conclusion does violate my subject identification rule. Here we have two separate subjects with competing interests, in the sense that if one happy society exists, then the other happy society does not exist (and similar assessments can be made for the negative scenario). But because we are discussing unique subjects, his logic does seem to fail: the good of one subject need not conform to the good of another. Once a subject is defined, it’s only the welfare of this entity that is up for consideration. And while an assortment of subjects may be addressed in a given discussion, each such “good” must always have a distinctly separate classification.

To take the presented idea to a much greater extreme than Dr. Parfit’s scenario, notice that even a marginal and momentary bit of happiness for one person should not, from this perspective, be sacrificed to cause the endless ecstasy of millions. The endless ecstasy of millions would obviously have far greater magnitude, though if this society is not the subject of consideration, then it will have no inherent relevance to the true specified subject. Observe that if the individual in question does make this very minor sacrifice which greatly benefits a given society, then by definition this will be “bad for him or her,” and specifically to the (admittedly weak) magnitude of surrendered positive sensation. When the subject is instead “the society,” however, then that minor sacrifice would be amazingly good for this defined subject, and to the exact degree of the perpetual ecstasy that millions will thus experience.
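The subject-relativity of this verdict can be shown numerically; the magnitudes below are hypothetical additions of my own:

```python
# One act, two subjects, two verdicts. The individual surrenders a tiny
# positive sensation; millions of others gain enormously. Magnitudes
# are hypothetical.

surrendered_by_individual = -1       # the minor sacrificed happiness
gained_by_others = 1_000_000         # "perpetual ecstasy of millions"

good_for_individual = surrendered_by_individual
good_for_society = surrendered_by_individual + gained_by_others

print(good_for_individual)   # -1: the act is "bad" for this subject
print(good_for_society)      # 999999: the act is "good" for this subject
```

Neither verdict contradicts the other, because each is scored against a distinctly separate subject.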

Instead of comparing separate subjects from a given ideology and then finding a term such as “repugnancy” with which to assess any discrepancies between them, Dr. Parfit might have simply observed that a great assortment of potential subjects exist, and that each harbors its own associated interests. If “subject identification errors” have been as widespread as I believe they have, this should help explain why philosophers have not yet been able to develop a generally accepted theory regarding “the good/bad dynamic.”

((The paragraph here is another example of the “general” or “non-subject-specific” exploration which I believe has greatly hindered progress in philosophy.))

Consider the following assertion of Jeremy Bentham, the long departed philosopher who is often credited with “inventing Utilitarianism.” He essentially asserted that “right” is “the greatest happiness of the greatest number.” Though our theories do each endorse the same essential ideology, I nevertheless disagree with this specific statement. In my view “the greatest happiness of the greatest number” is only useful to perceive as “the greatest good,” or that which is “right,” for a given society as a whole. Other defined subjects, both inside and outside of any specified subject, will have their own associated interests, and according to my theory these interests are based upon “the instantaneous sensations which the associated personal entities experience.” So instead of looking for a concept which is “generally good,” I believe that philosophers must look for the good of specified subjects from various potentially useful ideologies, so that a theory which does seem consistent with observation might ultimately achieve acceptance among scientists in general.


To now continue with the ideas of Dr. Parfit: by using the term “repugnant” he also raised the question of whether it’s possible for the realities of good to also be distasteful. But perhaps the scenario which he presented was a bit “subjective.” I personally do not perceive “great quantities of mildly positive sensations” to be inherently inferior to “small bits of extremely positive sensations,” for example. Nevertheless, apparently it’s not difficult to find implications of my own “total utilitarianism” which are repugnant to virtually all. But if such unfortunate implications do indeed exist, would this mandate that I’ve built a “non-useful” model of reality? Let’s take a look…

We know that there are people who experience a great deal of anger in their lives, and presumably these punishing sensations are caused by the circumstances of their existence. People under desperate conditions may thus find greater reason to lie, steal, hurt, or kill others in order to help make themselves feel better in various ways associated with their poor circumstances. I also presume that there are people who lack any capacity for empathy, whether for “environmental” or “genetic” reasons. Furthermore, apparently certain men are inclined to rape children, perhaps often somewhat due to unfortunate sexual desires.

The ideology which is presented here states that “when a sensation experiencing entity promotes its positive and/or diminishes its negative personal sensations, then this is inherently good for it.” Thus the instantaneous sensations which a man experiences while raping or otherwise harming a child will be good for him to the extent that his associated positive sensations surpass his associated negative sensations. So if this man is not sufficiently punished for such behavior, whether through social prosecution, remorse, or anything else, then this event may end up being good for him aggregately over the course of his life, given the associated sensations which he experiences.

Social considerations may be made here as well. Presumably this rape will be negative for a society which includes both the man and the child, and specifically to the extent that the child’s associated negative sensations surpass the man’s associated positive sensations. Societies in general obviously have reason to discourage such behavior, perhaps through legal threats, perhaps through greater surveillance, perhaps through testing so that fetuses with genes which tend to bring such unfortunate sexual tendencies may be terminated, and so on.

But it isn’t mandated from the premise here that a given rape will always be negative for a given society. If the raped child is not included in the definition of “one arbitrary society,” for example, then this social good could shift somewhat in the direction of an included rapist who now happens to be happier. Perhaps a given child that is not actually harmed by the man will still go on to experience a long life of extremely negative personal existence, and also hurt many others along the way. But if the man were to instead kill this child in such a way that the child experiences no negative sensations, and perhaps only then rape the dead body, then this act might end up being positive for the man (if he does ultimately enjoy this), for the child (if he/she is indeed spared from an existence which would otherwise be quite miserable), and for the larger society in general (if the full social negatives that this child would otherwise inflict will now go unrealized). So under these specific parameters, the presented theory shows a positive result for each of these subjects. But given such amazingly repugnant implications, does this prove that the presented ideology cannot be an accurate description of reality? Shall we make the assumption that Dr. Parfit seems to have made, that “good” must inherently be “non-repugnant”?

Once again, however, the stated subject identification condition suggests otherwise. Notice that competing interests should naturally bring somewhat repugnant implications to one side whenever the interests of another side prevail. Thus it should be difficult to demonstrate that “good” must inherently be “non-repugnant,” whether from my own ideology or any other, unless it’s actually found that the interests of all subjects always correspond perfectly with all others. But even in a situation where each associated subject does find a specific example of good to be extremely repugnant, is this a sufficient premise from which to dismiss the presented model? Here I would suggest that our perceptions of reality, which may indeed involve “the repugnancy sensation,” need not be the deciding factor given the entire spectrum of sensations which may be experienced.

((This paragraph demonstrates my position on the terms “ethics” and “morality.”))

“Ethics” and “morality” are commonly substituted for the term “good” itself, even though from an academic perspective ethics is often interpreted as “Philosophy’s discipline for determining good,” while morality may be considered “a general code of good.” I’m also quite comfortable using these terms under their common interpretations, however. If you choose to harm someone for personal gain, it may then be useful to classify you as “immoral” or “unethical.” And is it possible for the human to behave in such ways? Of course. But can this kind of behavior also be “good” for a given subject? This would depend upon the definition of “good” which happens to be used. Observe that if there is some kind of “afterlife” where a subject is highly punished for such behavior, then perhaps not. From my own “mortal theory,” however, it does seem quite possible for “immoral/unethical” behavior to also benefit a given subject in an ultimate sense.


((In these four paragraphs, “good and evil” are considered as opposed to the “good and bad” terms which have exclusively been used here.))

My perspective on “good and evil” is relatively standard in the sense that they are defined to only exist to the extent that behavior is “freely chosen.” For example, if I violently kill someone, and do so “with a perfect freedom of choice,” it may then also be useful to consider me “evil” in this respect. But if I’m instead perfectly compelled to do this (and perhaps even fight the people who squeeze a knife into my hand and force it into a victim’s body) I will then display “no evil nature,” even though this event might still be termed “bad.”

In my own work I do not use the good and evil terms, because they seem to exist only to the extent that an observer’s perspective happens to be “limited”… while I seek to describe “ultimate reality.” The larger an observer’s perspective happens to be, the more reasons should be seen to motivate a given example of behavior. Thus from a “perfect perspective,” our freedom/good/evil potential should logically disappear.

Though this reasoning may generally be somewhat clear, perhaps many “give up on it” rather than follow it to its conclusion, and in turn just assume that we have an inherent capacity to behave in “ultimately good” and “ultimately evil” ways. Note here that I happen to assume that reality is “perfectly physical,” and therefore I perceive existence to be “perfectly deterministic.” If existence thus transpires exclusively through cause and effect dynamics, then future reality must occur exactly as it does occur: millions of years from now a given event must happen exactly as it does, perfectly based upon the preceding physical circumstances which reality perpetually mandates. For “physical determinists” like myself, the thought that “the human” might alter a reality which otherwise functions through “cause and effect dynamics” is simply contrary to our perception of how existence functions.

One prominent challenge to this perspective concerns popular interpretations of the Heisenberg Uncertainty Principle, or essentially the claim that “an ultimately specific reality” does not exist. My own interpretation of this principle, which simply references “human ignorance,” is far less popular (though it was indeed taken by the great Albert Einstein). I can still observe, however, that all attempts to use Heisenberg’s principle “as a medium through which the human becomes perfectly free to choose behavior” seem amazingly ridiculous! Regardless, there should be little harm in using the concept of “good and bad” rather than “good and evil,” since this idea does not depend upon the highly suspect concept of “ultimate human freedom.”


4 Comments on “Chapter 11: The Social Entity and Subject Identification”

  1. Christophe,
    With a bit of divergence I do generally agree with your interpretation of the CRA. Nevertheless I think we should formally acknowledge that our side has failed so far. Popular modern consciousness theories such as GWT, IIT, AST, and so on flout the CRA, as well as Block’s China brain, Schwitzgebel’s USA consciousness, and so on. So is it now sensible to say that our side has done a good job? I don’t see how it could be. Surely it’s not right for us to postulate that people on the other side have simply been too stupid to understand. Instead we should ask why it is that our arguments have been unpersuasive, and certainly how they might be improved. Otherwise I don’t think we actually deserve validation, even if we do happen to be on the right side of this debate.

    Like yourself, I’m not satisfied with using Turing’s heuristic for testing the existence of consciousness. Observe that we presume consciousness in all sorts of non-human organisms which can’t pass this test. So why decide that consciousness requires one of our machines not only to be like one of these organisms, but also to be so ridiculously beyond them that it can actually talk with us intelligently? Even humans can’t do so without years of instruction. Here we’re not only asking such a machine to phenomenally experience its existence, but to do so in the fashion of an educated version of the most evolved creature on our planet. And yet because we can easily make non-phenomenal computers answer yes/no questions in a way that seems reasonably human, and theoretically more and more advanced algorithm processing might permit them to produce more and more detailed responses, the presumption has been that some day our machines will experience their existence by means of algorithm processing alone!

    The logic of this seems quite unfounded to me. Apparently our side made a critical mistake by even entertaining their proposal at all (and I include Searle’s CRA as part of this flawed entertainment). Today I think we should formally admit our mistake and proclaim that it’s not the appearance of human language that should be focused upon, but rather the appearance of phenomenal experience itself. Thus we should build thought experiments (and hopefully empirical experiments) that are focused upon phenomenal experience itself, not a far distant relative such as spoken human language. In these efforts consider a full rendition of my own such thought experiment.

    When a hammer strikes your thumb, experimental evidence suggests that algorithmic information about this event is conveyed to your brain by means of sensory nerve signals. Furthermore, it’s thought that these algorithms are processed there by means of the potential firing of billions of neurons connected by trillions of synapses. In general, when the brain receives applicable sensory nerve algorithms, it processes them to create output algorithms which go on to animate the function of various machines, such as muscles which control heart function. The one exception to this rule, however, is that it’s popularly theorized that, unlike other output function, the brain needn’t animate any phenomenal experience producing mechanisms whatsoever. It’s thought that when sensory nerves provide the brain with algorithms associated with a whacked thumb (which it then processes by means of associated neuron firing and whatnot), the processing alone should create that experience. This is odd in the sense that all other known brain function depends upon its algorithmic animation of mechanisms, such as the ones that produce hormones like testosterone by means of the gonads. What are the implications of phenomenal experience existing through algorithm processing alone, without any other mechanistic instantiation?

    Observe that if nerve information correlated with a whacked thumb were instead expressed on paper, and that paper were properly processed by a vast scanning and printing computer into another set of information on paper correlated with associated brain processing, then something here should experience what you essentially do when your thumb gets whacked! I have no idea what would do this experiencing, since we’re merely discussing properly inscribed paper that’s put through a machine to properly create another set of inscribed paper. But theoretically, “processed information alone” theorists hold that something here would nevertheless have such an experience.

    The main reason that I consider this scenario ridiculous is because in a causal world, algorithms should only exist as such by means of associated instantiation mechanisms. The algorithm of a shopping list should not exist as such except in respect to something that’s able to decipher it. The algorithm of a VHS tape should not exist as such except in respect to a machine that’s able to display its video content. Nevertheless popular consciousness theorists today seem to bypass a famous “hard problem” by instead theorizing the existence of algorithms that are mechanism independent. As such these proposals seem no more falsifiable than the existence of God.

    If we had paper with markings on it effectively correlated with the information that nerves send your brain when your thumb gets whacked, and it were somehow processed into another set of paper with markings effectively correlated with your brain’s response, you might ask what it should take for something to actually feel what you do. Naturalism suggests that this second set of paper would need to be fed into a machine that’s set up to use it by means of the same phenomenal physics that your brain uses for you to experience existence. Furthermore, I suspect that Johnjoe McFadden is miles ahead of anyone else on this front, and so such an experience would exist in the form of a specific field of electromagnetic radiation that such a machine would produce. Unlike unfalsifiable algorithm-only proposals, this is testable; I’ve actually proposed a way to test his theory. The problem seems to be that money in general is spent to support popular algorithm-only proposals, and they’re fundamentally untestable given their instantiation mechanism void.

    I’d also like to say that I agree with the spirit of your position on how inferior human engineering happens to be when compared against the work of evolution. I don’t believe we should ever expect to create machines that are able to think even somewhat comparably to us. Still, there are two reasons that I think we should beat this particular drum softly. The first is political. If we explicitly deny each outlandish musing of the other side about how the human will create advanced thinking machines, then we come off as small minded oppressors. Here I’d advise us to instead smile and be polite. Theists should be handled this way as well, I think. But secondly, there seems to be a danger of them legitimately faulting our naturalism here. If we posit that some mysterious “life stuff” is the reason that the brain is able to create human thought, then we effectively become substance dualists like Chalmers and all the rest. Instead I think we must formally acknowledge that if a human made machine were to implement the physics used by the brain to create phenomenal experience, then that machine must also experience its existence in some manner by means of such physics. Though Searle was also very explicit about this possibility, I’ve noticed many get him wrong on that count, and even once a now prominent graduate student of his!
