What I Stand For Part II: Meta Science

In the first installment of this series I addressed a topic that’s been important to me since I was a teen. It was then that I realized that each of us is a self-interested product of our circumstances, in contrast with our standard righteous moral assertions otherwise. I decided that the reason we deny and denounce our nature in this regard is that “selfish” is exactly what we are. Thus we rightly reason that others will tend to treat us better if we can convince them that we consider our happiness to largely depend upon their interests. Here outwardly denying our personal selfishness can ironically be the selfish way to go. (It’s essentially salesmanship 101. Have you ever met a successful salesperson who didn’t portray themselves as having their customers’ interests in mind while at work?)

I refer to this as the evolved social tool of morality, and it seems to exist to help a fundamentally self-interested variety of creature function more productively in social settings. So strong does this tool seem to be that our mental and behavioral sciences have not yet been formally taken up in an amoral capacity, which is to say in the manner that harder forms of science are naturally explored. I believe that without formally acknowledging that “feeling good” constitutes the welfare of anything for which welfare exists, these forms of science will continue to expand sideways rather than up.

(The behavioral science of economics seems to be an exception to this rule — it actually is founded upon the premise that happiness constitutes the value of existing. Apparently this has been permitted because it’s far enough from the central field of psychology to not overtly challenge the social tool of morality. Thus here we have a behavioral science with a central thesis from which to build, and so it has been able to display a “hardness” which more central behavioral fields like psychology and sociology have not.)

While I consider this observation to have served me well over the majority of my life, after I began intensely blogging on the topic I realized that science was in need of effective principles of epistemology and metaphysics as well as axiology, and so all three domains of philosophy. And though science sprang from philosophy in recent centuries to utterly transform our species, modern philosophers seem to jealously guard their independence and fervently denounce criticism of their inability to come up with any agreed upon principles in these regards so far. Thus here we seem to have a key to progress in science, but it formally lies under the heading of a defensive community which seems culturally bred to believe that it’s unable to provide what scientists need in this regard. So how might science nevertheless gain various effective principles of metaphysics, epistemology, and axiology from which to work, and so progress where it hasn’t so far been able to?

My suggestion (and the topic of this post) is that we’ll need to develop an initially small community of “quasi-philosophers” who differ from the standard variety in the sense that their only understood mission would be to become a respected community armed with various agreed upon principles from which to help science function more effectively than it has in the past. If traditional philosophers in general wouldn’t mind letting this second community reside under some separate classification of “philosophy”, then I’d also endorse that term. Otherwise however this community could be referred to as “meta scientists”.

Furthermore I’ve developed four such principles which might help improve the institution of science. The first of them is my single principle of metaphysics. It observes that science is destined to fail to the extent that causality itself does. The practical effect of this would be to confine general science to an entirely causal institution, while scientists unable to comply with this mandate would need to cultivate their ideas under a “causal plus” second variety, or effectively one open to the existence of “magic”.

Next there is my first principle of epistemology, which gets into the nature of definition. It obligates an evaluator to accept the implicit and explicit definitions of a proposer in the attempt to assess what’s being proposed. This lies in contrast with the “ordinary language” approach of certain philosophers, though we do each seek rules of order so that evaluators might consider various issues by means of the definitions which proposers actually mean. The difference seems to be that I consider it important for theorists to freely develop definitions associated with the nature of their ideas, whereas they consider it important for theorists to work under the constraints of common term usages (though these in themselves seem highly variable and so can create definitional disparities).

My second principle of epistemology essentially addresses the procedure of science, and goes beyond it as well. It states that there is only one process by which any qualia based function (which is to say the kind which is “conscious” as I define it), consciously figures anything out. To do so it takes what it thinks it knows, or “evidence”, and uses this to assess what it’s not so sure about, or “a model”. As a given model continues to remain consistent with evidence, it tends to become progressively more believed.
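
This process can be loosely formalized. As a hedged sketch only (Bayes’ rule is just one possible formalization, not something the principle itself mandates, and the probabilities below are arbitrary illustrations), belief in a model climbs as evidence remains consistent with it:

```python
# A hedged sketch: Bayes' rule as one way a model might become
# progressively more believed as evidence stays consistent with it.
# The specific probabilities are arbitrary illustrative values.

def update_belief(prior, p_evidence_if_model_true, p_evidence_if_model_false):
    """Return the posterior belief in a model after one piece of evidence."""
    numerator = p_evidence_if_model_true * prior
    denominator = numerator + p_evidence_if_model_false * (1.0 - prior)
    return numerator / denominator

belief = 0.5  # initially unsure about the model
for _ in range(5):  # five observations consistent with the model
    belief = update_belief(belief, 0.9, 0.5)

# belief has now climbed from 0.5 to roughly 0.95
```

Notably the belief never quite reaches 1.0 here, which fits the point that the process yields progressively stronger “belief” rather than outright “truth”.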

Then finally there is my single principle of axiology, which I began with here and was the topic of the first post of this series. If we cannot formally acknowledge the nature of “value” which drives the function of the conscious entity (and whether this failure is fostered by the social tool of morality or not), then we simply should not have a solid platform regarding our nature from which to effectively build. Apart from the science of economics this seems to be the situation that our mental and behavioral sciences remain in today.

Though these “meta scientists” would be proposing such principles (and they might be made up of scientists, traditional philosophers, or otherwise), the ultimate arbiter of which principles this community accepts would be their practical effect within the institution of science itself. This is to say that scientists in general should end up validating or refuting what’s proposed. Otherwise, most notably, I suspect that our mental and behavioral forms of science will continue to expand sideways rather than up.


18 Comments on “What I Stand For Part II: Meta Science”

  1. Hi Eric,

    Interesting read. As I go through this I wonder if the “softness” of a science might simply be “the number of uncontrolled variables.”

    • Thanks for stopping by Ben. Yes science should tend to be more difficult when there are more uncontrolled variables to wonder about. That’s not quite what I was getting at however. My point was that science in general needs various generally accepted principles of metaphysics, epistemology, and axiology from which to function better than it does today, whether mine or others. Note that meteorology, macro economics, and earthquake prediction tend to have lots of uncontrolled variables, though scientists do still seem to have a pretty good grasp of associated basic principles.

      Furthermore regarding the axiology component I believe that our mental and behavioral sciences fail particularly given the social tool of morality. It’s essentially that facing up to the purpose of conscious existence (which I consider “to feel good”) tends to conflict with the standard moral theme that doing good for others is actually what’s best for us. So without a founding axiological premise from which to begin, central mental and behavioral sciences tend to have a great deal of trouble. Conversely the science of economics seems able to accept feeling good as the purpose of conscious existence, perhaps because it’s far enough from the center to not as blatantly challenge standard moral notions.

      At some point I’ll get to a third installment of this series and so provide a practical description of how conscious life functions in a basic capacity. And since we’re human, we will be the focus.

      • The trouble is Eric that it may well be that the sole purpose of conscious existence is to facilitate reproduction. And that everything else associated with consciousness is a mere by product of that. In short, we humans tend to look for reason and meaning where they may be none. In which case we can and should create our own meaning and veer far away from what evolution has saddled us with.

        • I agree with your point as far as I can tell Anthony, though let me make some clarifications and expansions to see how they sit with you.

          By “facilitate reproduction” I suspect that you mean evolution’s effective purpose for life is “genetic survival”. It’s only the continued survival of various iterations of life over time that should matter to evolution, regardless of how much reproduction occurs at a given point.

          Initially none of evolution’s machines should have had purpose in themselves (contra panpsychism), though evolution’s purpose for those machines should have been their genetic proliferation. So there shouldn’t have been anything it is like to exist for any of these organisms, just as there shouldn’t be for our machines. No personal welfare. For evolution the problem here seems to be that it couldn’t write productive enough logic statements from which to deal with more “open” circumstances all that well. So just as our robots can’t be programmed to, for example, remodel someone’s bathroom, there were various circumstances where evolution couldn’t provide its machines with effective “If…then…” kinds of instructions.

          Evolution seems to have effectively overcome this by means of some sort of physics by which existence feels good/bad, and so the emergence of the agent. Here we have sentience, which is to say qualia, which is to say consciousness itself (as I consider the term most productively defined). This element of the organism would thus have the purpose of feeling as good as it can from moment to moment.

          Evolution seems to use this tool to cause sentient forms of life to function better under the more open parameters that it can’t effectively program for (like that bathroom project). So instead of a damaged body being dealt with by means of instructions about how to stop such damage (as we might today for a robot), evolution also provides “pain”. Here it’s then somewhat up to the agent to figure out how to fix the problem and try not to let it happen again, given associated punishment.

          To finally get to your point, note that if there were a god responsible for the horrors of our world, then there’s no question that he’d be fucking evil. But evolution is a process rather than an agent, and so shouldn’t be characterized as a blamable entity. We can still say “fuck evolution” however, which I think is exactly what you’re suggesting now. Yes I’ll sign that petition!

          So how might we effectively fuck evolution? I think it would help if we could formally grasp our hedonistic nature. So far the social tool of morality seems to prevent the field of psychology from amorally pondering our nature in this way (unlike the more distant science of economics, which does happen to be founded hedonistically). Thus, I think, psychology largely fails.

          Once it does gain effective models of our nature however, we should then be able to use those descriptions to help us better lead our individual lives as well as structure our various societies. So I think happiness for humanity is coming, though first a major paradigm shift is needed to harden up our mental and behavioral sciences.

  2. Eric,
    I don’t have much to add from the many discussions we’ve had on this.

    One point on definitions I can’t ever recall making before, is that maybe the way to handle differing definitions is to identify that there are different definitions involved and to clarify and delineate them. As I mentioned before, I think this is something philosophy can do pretty well. Unfortunately, it often doesn’t.

    • Mike,
      Yes delineating and clarifying different standard definitions for the terms that we use is certainly an important job. While both reading and writing I’m constantly looking up terms, whether to help understand what someone might be saying, or to decide how to more effectively communicate a given idea. But dictionaries and encyclopedias seem to already do that pretty well.

      My first principle of epistemology goes beyond generic definitions to help the theorist build more effective theories. Here it obligates the evaluator to accept a given definition in the attempt to grasp what’s being said. Often enough people today seem to talk past each other through separate definitions, or may even reject a given theory from the premise that false definitions are being provided. I mean to help fix this basic structural problem in academia. My EP1 suggests that definitions should instead be assessed on the basis of their usefulness in a given context. For example I consider the existence of qualia itself to be an extremely useful definition for “consciousness”, though I can assess other consciousness definitions for their potential usefulness as well.

      • Eric,
        One of the problems with just accepting definitions is often they come with an ontology that we think is wrong. I just recently had a long conversation with someone about an aspect of a scientific theory. After a long discussion, it became evident that we have fundamental disagreements about what that theory is. I could have accepted their definition of the theory, but it would have made any assessment of the actual question irrelevant.

        Which is to say, sometimes the best thing to do is identify that the actual disagreement is definitional. But it’s also true that often the line between a definition and the actual ontological point under discussion is broad and blurry.

        • Mike,
          The thing here is that you’re talking about a problem which exists today, or before my EP1 has become generally accepted. Thus you might think, “Wow things are already pretty bad, but should only get worse if it were conventional to accept the definitions of others when we assess their theories. In that case I’d sometimes need to make various ontological concessions that I consider wrong.” But no, accepting a given person’s definition for the purpose of assessing their arguments should never mandate any ontological concessions.

          I suspect that I could walk through the discussion that you mentioned to demonstrate how my EP1 could have helped clarify the issues that you were having. I wonder if you could at least share which theory you were disputing?

          I suppose that you and I have various ontological disagreements, and so define certain terms in conflicting ways given those disagreements. For example there is something that you and most people call “computationalism” which I call “informationism”. But walking through that could spill into issues beyond assessing my EP1 itself.

          It might be best to consider my EP1 by means of something that neither of us is very invested in. That doesn’t seem too interesting to me however. Instead let’s try something that we’re at least on the same side of, or panpsychism.

          Someone could define all causal function to be conscious function, and my EP1 states that I’d be obligated to accept this definition in order to assess their ideas. But if so then it seems to me that the “consciousness” term itself would become useless in its former sense. There’d be nothing to distinguish our function from the function of anything else.

          Let’s say that someone defines consciousness as the existence of qualia (which I consider generally productive), and says that everything has at least trace amounts of this. Here they might propose some complex formula regulating its magnitude (perhaps like integrated information theory?). But does the proposed theory effectively predict the existence of my own qualia? Does experimentation reveal that a certain kind of drug will eliminate it, while another will alter it, and yet another will have no qualia effects at all? How much qualia does it suggest is experienced by a rock, a tree, an electron, or the USA as a whole? In all cases here I can accept their definition for “qualia”, but merely for the purpose of assessing its effectiveness.

          • Eric,
            I’d prefer not to get into the details of the other conversation.

            But your consciousness examples are good. I’ve encountered naturalistic panpsychists who define consciousness as interaction with the environment. If we accept that definition, then panpsychism is true. Of course, as you noted, that only shifts the problem from the nature of consciousness to the nature of human and animal consciousness.

            One thing when assessing definitions themselves is how much information they actually provide. Saying that consciousness is a set of qualia may help if someone knows what “qualia” are but not “consciousness”, but it provides no real information beyond that. Likewise with the “subjective experience” or “like something” phrases, which are vague and broadly synonymous with “consciousness”.

            Of course, the issue is when we try to get more specific, we immediately run into disagreements. IITers have their definition, as do GWTers, HOTTers, and many others. No one really accepts anyone else’s definition, which makes any empirical data unpersuasive. And that fact alone, I think, is worth understanding. I think it tells us something important about our intuitive sense of the word “consciousness”.

            • Mike,
              Here I need to keep focused because I could easily be tempted to go off on all sorts of tangents. And indeed, there’s no reason to not explore those paths as well, but not before the stated business is resolved. Do you agree that when an evaluator accepts a given definition for the “consciousness” term (or any other) in order to assess the proposer’s ideas beyond the definition, no ontological concessions are mandated? My point is that there are more and less useful definitions in a given context, not that any truth exists in any of our definitions. If you disagree then I’d like your reasoning.

              Furthermore I consider the “qualia” term as commonly understood, as well as “something it is like” and “subjective experience”, to be extremely useful definitions for the consciousness term in general. Thus if we were to build a computer which can experience what we’d call “pain” if we experienced it, then it would also be effective to say it’s “conscious” in that regard, and even if all it were effectively set up to “do” was have that experience. I consider this definition quite specific and effective. All that remains unknown here is how nature causes us to have qualia.

              If IIT, GWT, HOTT, or any others happen to be aligned with this qualia based definition for consciousness, then I’d consider this hopeful. It would mean that they’re simply proposing different ways to answer how to create the same qualia / consciousness. Would you say that such alignment does exist for these three and perhaps others?

              • Eric,
                I definitely think it’s possible to accept a definition for purposes of assessment and discussion. But I do think there’s enormous value in clearly delineating the definition(s) so the parties understand and agree on what they’re discussing. You mentioned accepting implicit definitions. I’m okay with that, but I think in the process of accepting them for discussion, they should be made explicit.

                For example, how would you define the term “qualia”? You cited pain as an example. I assume hunger, fear, and joy would also count as examples. But what about the visual discrimination of red? Would an affective component be required? If so, how would you define “affect”?

                I think the answer for IIT, GWT, and HOTT toward qualia would depend on which definition you’re using. IIT might be onboard for some philosophical definitions, but GWT and HOTT would likely have a narrower conception, with many advocates objecting to the term “qualia”, preferring something with less philosophical baggage.

                • Mike,
                  So you wouldn’t say that theories like IIT, GWT, and HOTT are unreservedly down with what’s commonly understood as “qualia”, as consciousness in itself? Thus something “in pain” won’t also need to be assessed as “conscious”? That sounds about right to me. As things stand I’m not exactly a fan of any of them. Given vested interests I suspect that they’ll continue morphing to prevailing winds to help stay in the game however things end up going in science. In many ways my own ideas are far more disprovable.

                  It sounds like you’re good with my EP1 but ask for a bit more as well, or for implicit definitions to be made explicit. That’s fine. I only say that the reviewer should try to accept implicit definitions as well because in practice explicit definitions for everything would be asking a lot. This is a good faith measure where the reviewer at least seems to want to understand what’s being assessed. It’s something to at least mention given that we all have our own agendas to promote.

                  So how might I characterize “seeing red” regarding affect, and thus qualia, and thus consciousness? That’s a good question and my solution is only partly clear to me. For it we must delve into my psychology based model of brain function. Thus here it’s my definitions that you must try to use in order to potentially understand and so make effective assessments. I’m going to skip the “how qualia arises” question and go right into the psychology. Hopefully that will help keep us on track here.

                  I define “affect” or “valence” as the element of reality by which existence feels good/bad. It’s a punishment / reward dynamic that might exist by means of a god, and/or when certain information is processed into other information, and/or through some objective sort of physics, and/or whatever else. Regardless the point is that I define this as the exclusive good/bad element for anything that exists. Here there must inherently be “something it is like”, and by definition not otherwise.

                  Note that different magnitudes of affect or valence will inherently provide the experiencing entity with different information, given that each will feel different to the experiencer. Furthermore evolution (or whatever) seems able to provide an amazing array of different valence types to such an entity, and so an amazing array of associated information as well. For example it’s natural for us to think of toe pain simply as something which hurts. But toe pain also provides the experiencing entity with information about its state — otherwise the feeling shouldn’t be all that helpful in an adaptive sense. So not only shall affect feel good/bad, but it should also potentially provide relevant information to the experiencing entity.

                  In this light, here’s what I’ve come up with regarding visual images such as “red” in terms of affect and thus consciousness. This is provided to the experiencing entity essentially in more of an information based capacity than an affect based capacity. In any case it’s the singular affect experiencing entity each moment that’s provided with what we know of as “red”, as well as other forms of qualia such as “sour”, “anger”, and all the rest.

                  Ultimately in my psychology based model I classify three forms of conscious input (valence that drives it, information that informs it, and memory that joins it with past states), one form of processor (thought by which it interprets such inputs and constructs scenarios from which to advance its valence based interests), and one variety of non-thought output (or muscle operation). So in total the consciousness dynamic may be seen as a functional computer in itself that’s created by means of the more standard brain form of computer. It is from these definitions that I believe an extremely effective model of life function exists.

                  • Eric,
                    The question is, what is commonly meant by “qualia”? It’s worth noting that it’s a philosophical term, not a colloquial one, so the baggage that philosophers have loaded onto it is relevant. I’d say most of the theories would be more comfortable with the common meaning of things like “feeling” and “perception”.

                    On not being disprovable, I’d point out that the usefulness of any proposition is proportional to how potentially falsifiable it is. If you can’t identify what might potentially invalidate your ideas, then it becomes difficult to see them as insights. They could be philosophical conclusions, but often those are personal decisions with no fact of the matter.

                    Anyway, my guiding principle is clarity, which is why I favor explicit definitions for any definition where assessment of the proposition may hinge on that definition.

                  • Mike,
                    I haven’t noticed philosophers or educated people in general loading the “qualia” term up with extraneous baggage. Can you give me an example? What I’ve noticed are examples of perceptions, feelings, and things like that, which I consider quite productive for the term in general. Furthermore I consider the experiencer of this kind of thing productive to call “conscious” in this sense, and so discount theories which imply that something which has some qualia needn’t be conscious. Yes they have the right to make such a definition given my EP1, though I consider this misleading since most presume the consciousness term to address whatever experiences anything.

                    Anyway in my last comment I attempted to help join the qualia of red with the affect term. It’s been a bit of a concern for me to square “feeling good/bad” with visual information, since the visual might not feel like much. Regardless in my model it’s only important in the end to state that the experiencer receives visual information in a qualic capacity, not to get into how visual information might exist as affect in itself.

                    You seem to have misread my comment on provability. I consider my model more disprovable than IIT, GWT, and HOTT given that they seem more interpretive. And in truth they’re ultimately about how to create qualia whereas my model concerns the implications of qualia’s existence itself. So any of them could be effective though my model would go beyond them into the realm of psychology.

                    I realize that lots of people are interested in your new mind uploading post so it may be hard to keep up with responses. Don’t worry about me right now if at all inconvenient. I might even say some things on that post. You may have noticed that I sometimes try to have productive discussions with interesting people over there other than yourself. We’ll see.

  3. Fred M R says:

    Hello, Eric.
    I’d like to note that, when you say that ‘we rightly reason that others will tend to treat us better if we can convince them that we consider our happiness to largely depend upon their interests.’, I am reminded of Alexis de Tocqueville’s concept of enlightened self-interest, which he introduced in his book Democracy in America.
    Alexis de Tocqueville was one of the first classical sociologists, and his concept of enlightened self-interest entails that ‘people voluntarily join together in associations to further the interests of the group and, thereby, to serve their own interests’.
    I’d also like to know if you are acquainted with views on cooperation and altruism in evolutionary biology, such as Kropotkin’s work on the subject matter. According to Kropotkin, not all human societies are based on competition as are those of industrialized Europe. And there is an inherent altruistic nature in human beings for, otherwise, there seems to be no way to explain why human beings would spend even two decades taking care of their own children without receiving something in exchange (unless you take the gene-centered view of evolution by Hamilton and Dawkins).

    Concerning your first metaphysical claim, how would you deal with the issues of hypothetico-deductive methods in which “from theory X an experimenter deduces and thus predicts that, under certain circumstances, C will be observed” and if C is actually observed during certain circumstances, then the theory X is true?
    According to authors such as Chiesa, if C is found to be the case, it cannot be argued that therefore X is the case, because C could result from other processes or mechanisms included in a competing theory. This is basically the fallacy of affirming the consequent. What kind of scientific method would your community adopt then? An inductive one, or one different from both the inductive and hypothetico-deductive?

    • Those are some great questions Fred! I’ll start with the second since it may help a bit with the first.

      My first principle of epistemology mandates it to be the obligation of an evaluator to accept a given theorist’s definitions for the purpose of evaluating that person’s models. So if you define “welfare” to be constituted by how well someone is thought of by others, for example, as well as theorize that helping others tends to promote the helper’s “welfare”, then I must accept that definition in order to assess whether or not your model happens to be a useful reduction of our nature. Unlike the ordinary language perspective, here I can’t refute your model by claiming that welfare is actually something different (such as feeling good). My EP1 states that there are no true or false definitions, but rather more and less useful definitions in a given context. I consider definitional misconceptions and disputes to be incredibly destructive in academia today, and so propose this principle to potentially help.

      Regarding your “hypothetico-deductive” observation however, this seems more related to my second principle of epistemology. It states there is only one process by which anything conscious, consciously figures anything out: It takes what it thinks it knows (or evidence), and uses this to assess what it’s not so sure about (or a model). As a given model remains consistent with evidence, it tends to become progressively more believed. But note that this is quite consistent with authors such as Chiesa since an inductive rather than deductive proposal is made, and the end result merely provides “belief” rather than “truth”. So that’s how I handle the fallacy of affirming the consequent — my model never violates it.

      Regarding “we rightly reason that others will tend to treat us better if we can convince them that we consider our happiness to largely depend upon their interests”, I’m actually being cynical here, which is to say the opposite of Tocqueville or modern evolutionary biologists. The point is that if on some level you grasp someone else’s desires, then that person may help you with yours if you’re able to convince them that your interests happen to be their interests. I’m talking about “sales” here, and all human relationships involve selling at least somewhat. Some of us are simply far more skilled in these regards than others. This is to say that some of us are far better “liars” than others of us, and whether such falseness is realized explicitly or not.

      I used to talk quite a lot with someone who was extremely into evolutionary biology. He’d tell me about how in certain primitive tribes no child would ever accept candy from a westerner and simply eat it, even if no one else would know. Instead the child would joyously take it around to share with as many as possible. He considered that to be the natural human state which the modern world should strive to get back to.

      Here’s where I can bring up Chiesa’s “it cannot be argued that therefore X is the case because C could result from other processes or mechanisms included in a competing theory”. My own ideas not only account for the behavior of this child, but why it can be best for someone to take advantage of others, why some people choose to be parents even when it’s a bad decision for them, and standard observations of human behavior in general I think. The position is formally known as psychological egoism. To sufficiently explain this would take more than would be appropriate for a single blog comment, but could be achieved in a longer two way conversation.

      For the moment let me say that in a small tribal society where everyone is quite visible, social pressure can influence a person to magnify altruistic displays, and especially if such behavior has been highly marketed within. It’s the same for people selling cars in big anonymous societies, though here good sales people will be skilled at sizing a given person up and telling the right story, whether it’s true or false.

      Alexis de Tocqueville would be a great example of a person who has succeeded by telling people a story that they want to hear. Note that we’re somewhat able to explore hard sciences objectively, and this should be a strong factor in their hardness. We haven’t been able to display much objectivity in our soft sciences yet however. But science is still a relatively new institution and should prevail in the end. I suspect that our mental and behavioral sciences will harden up as they become explored with similar amorality as is standard for hard sciences.

