The Archives

Extracts from public discourse

Philosophy of mind

The problem of evolutionarily irrelevant strong emergence (07 April 2017)

1. We can't measure sensation (mental properties themselves). We can only measure a) the neural correlates of what we presume to be sensation or b) self-report of sensation. Our inference of sensation/sentience is based on the logical extension of our personal belief/experience of sensation to like organisms, machines, etc, but there is nothing in the known laws of the universe which defines their nature/qualia (eg the redness of red) or when they emerge. I think we need to clarify this concept of "empiricism" for the audience (which is really the combination of the epistemological primacy of sense data and non-reductive physicalism). There is nothing faulty/incoherent about non-reductive physicalism; however, it is critical to distinguish it from the concept of empirical observation (measurement).

2. This distinction prevents us from falling prey to the kind of positivism which purports that everything accessible to us is accessible also to the empirical method. Physical is by definition (in physics) what is empirically measurable, and there is therefore a significant proportion of known (inferred) reality which is formally non-physical. Under the philosophy of "physicalism" however (which is somewhat of a misnomer according to the definition of physical), we assert that all of our experiential reality (mental properties) is mapped to physical reality (observables). There cannot be any phenomenological experience which is not grounded in nature.

3. Furthermore, this distinction prevents us from automatically assuming that materialism (non-reductive physicalism) is a satisfactory ateleological philosophy of mind. Under naturalism, a physical system evolves perfectly according to the laws of nature. Therefore, ostensibly emergent mental properties are redundant (see Jaegwon Kim on non-reductive physicalism; in particular his thesis on overdetermination). The organism (including its central nervous system) functions perfectly according to the laws of physics (be they deterministic or indeterministic) without any unnecessary strong emergent phenomena. Strong emergent properties are qualitatively distinct from physical emergent properties (like crystals) in that they cannot be empirically observed.

Joseph Tagger: Can you prove you're self-aware?
Will Caster: That's a difficult question, Dr. Tagger. Can you prove that you are?
(Transcendence, 2014).

The apparently arbitrary assignment and nature of mental properties (evolutionarily irrelevant existence; our brain functions and evolves perfectly fine without them) leads most contemporary/secular philosophers of mind to argue either a) eliminativism, b) 'informationism', c) panpsychism, or d) simulation.

a) Eliminativism: that mental properties (or their perception of physical non-reducibility) are an illusion. Yet assuming that we take both our internal sentience (existence/experience) and our extrapolation of this sentience to like organisms as true (although such cannot be empirically verified), what is its basis: why does it exist? Informationism, panpsychism and simulation attempt to explain why some systems (peculiar subsets of the universe in space-time; eg human CNS, Pentium III, etc) have this apparently emergent phenomenon.

b) 'Informationism': That mental properties are the natural product of complex arrangements of matter/energy above a given threshold of complexity (sentience is just as fundamental as, if not more fundamental than, observables, in that the universal system "knows they are coming"). Informationism assumes that mind is the inevitable outcome of the arrangement of matter/energy in sufficiently complex states. Such however requires nature to be geared towards the creation of sentience, and is as such not readily distinguishable from pantheism.

c) Panpsychism: That all physical entities have (the capacity for) associated mental properties. There is no distinction between physical and mental substances, though unlike physicalism the material does not take precedence over the mental. Panpsychism asserts that consciousness is an inherent property of all particles (energy/matter) in the universe. Panpsychist models are however not without their own limitations. Apart from their animistic inelegance (hypothesising sentient rocks for instance), nothing in the laws of nature defines which systems (collections of particles in space-time) should combine to form complex indivisible centres of consciousness like you or me (the Combination Problem).

d) Simulation: That the material world as experienced by us is not the underlying construct of mental existence but merely the designated method for generating its experience. Simulation (like substance/Cartesian dualism) pushes back the problem of the underlying construct/laws of mind to another universe. This philosophy of mind has elements of theism (alien gods).

4. There is also another critical although somewhat unrelated limitation in the positivist analysis. Although one can observe a consistency between nature (regulated behaviour or causality) and logic, one cannot use nature to formally derive logic; any attempt to do so is circular, since one must assume reason as an axiom in order to make/process our empirical observations (to follow the empirical method). For this reason logic (like mathematics) is declared to comprise non-physical abstract objects.

... It is important to note that mental properties are not the same as physical (empirical) consciousness. Imagine a computer with a model of self (physical consciousness). It acts like an intelligent conscious being and its CPU (brain) and speaker (mouth) inform us that it is self-aware. We can measure this model of self and how it has been encoded in the computer ("I", "HAL", etc). Yet we have a choice (or must come to some philosophical conclusion as to) whether to believe that this model of self corresponds (maps) to an internal reality, or, conversely, whether it is merely a software program telling us what it has been programmed to tell us.

In the case of mammalian/human evolution, I think it is very likely that physical consciousness evolved for the purpose of enhancing the species' survival (ie it is adaptive, as opposed to being a byproduct, as referenced by [X]); but this says nothing of the reality of internal existence. Mental properties are functionally and therefore evolutionarily irrelevant from a physicalist perspective. The central nervous system of Homo sapiens is declared to operate according to the laws of physics, and such laws only reference physical (eg neuronal) properties. A substance dualist could argue that mental substances (and their properties) serve some biological function, but Cartesian dualism has many problems not worth examining here (eg interactionism).

... Mental properties are a sentient being's internal experience of objective/physical reality (this 'stream of consciousness' will include things like the smell of a particular flower or the colour of a particular region of one's field of view). Our own mental properties can by definition be observed (sensed/felt) by us, but: 1. we have no direct access to another being/machine's mental properties (and we only have indirect access if we make some philosophical assumption about their correspondence to observables; eg physicalism); 2. mental properties cannot be measured (empirically observed). One cannot measure or confirm the existence of one's own or another's internal experience using the empirical method. In regard to [X]'s analogy, (under physicalism) our brain will encode some representation (through its neural networks) of this colour or smell, and this can be measured.

... all human sensual experience (under naturalism) corresponds to a heavily processed reconstruction of reality (including object recognition, motion detection, categorisation etc). When the creature is in a "conscious" (aroused) state this experience is generally derived from some external reality, but it all nonetheless corresponds to physical reality (the neuronal processing of objects, concepts, etc). Note that even in our dreams/hallucinations/thoughts (internal verbalisations) one is still experiencing physical reality. It just so happens that the part of physical reality being experienced doesn't correspond to any reality outside of the organism itself.

The empirical method (measurement) doesn't need to be conducted by a sentient being (one could imagine an intelligent non-sentient machine deriving many truths about the world using the method and then stating its conclusions in a text box). This is the advantage of the method; it is entirely objective (given its assumptions). One could argue that the Copenhagen interpretation of QM suggests that measurement might require a sentient being however (many have): this takes its proposition of denying local realism and makes the additional assumption that the wave function collapse occurring during measurement is caused by sentient observation. (Note this has nothing to do with 'the observer effect', ie the physical consequences of measurement on a system.) Many however suggest that the wavefunction collapse during measurement is not a product of sentient observation, and propose various alternative conditions for its collapse (for example decoherence and a spontaneous "minicollapse"). Likewise, even if sentient observation were required for measurement (and for the outcome of reality itself/the collapse of the probabilistic wavefunction into a definite solution), it would be difficult to argue physicalism (given the primacy of mental reality; something akin to simulation theory). It does however solve the redundancy problem.

... The scientific method requires empirical measurement and such measurement has no access to mental properties (it therefore cannot use them per se). Measurement only has access to physical properties (eg the state of a particle, neuron, etc). Under physicalism such physical properties are assumed to correspond to mental properties - however one would struggle to find a neuroscientist (speculating about philosophy) who adopts reductive physicalism (a one-to-one correspondence between mental and physical properties), given how information is distributed across neural networks. Most physicalists uphold non-reductive physicalism, specifically the thesis of supervenience (that there cannot be a change in a substance's mental properties without a corresponding change in its physical properties), or attempt some form of eliminativism. Under the peculiar form of the Copenhagen interpretation discussed, measurement requires mental properties but it does not use them per se (it still has no access to them).

... Not only is the empirical method inherently probabilistic (one can only conduct an experiment so many times to rule out anomalies; hence its determination of p values against a null hypothesis), but it is based on philosophical assumptions which are inherently unprovable (ie axioms). More generally, these include the validity of logic, the existence of self, the validity of mental properties in capturing/experiencing a "real" (objective/physical) world, etc.
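As an aside on the probabilistic character of the method, here is a minimal sketch (Python, standard library only; the experiment, counts and null rate are invented for illustration and appear nowhere in the discussion above) of what "determination of p values against a null hypothesis" amounts to: the experiment never proves its hypothesis, it only reports how improbable the observed data would be were the null hypothesis true.

from math import comb

def binomial_p_value(successes, trials, p_null=0.5):
    # One-sided p value: the probability of observing at least `successes`
    # hits in `trials` attempts if the null hypothesis (rate = p_null) were true.
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical experiment: 62 "hits" in 100 trials against a 50% null rate.
print(binomial_p_value(62, 100))  # roughly 0.01: improbable under the null, but never impossible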

The empirical method is based on an assumption of causality in the measured system; it was not designed to assure philosophers of the existence of a causal relationship between their personal experience of reality and reality itself. In fact, this relationship will vary heavily due to subjective biases (psychophysics). One therefore does not conduct experiments using subjective measurements (unless the underlying construct is difficult to measure otherwise, eg in psychology, in which case any differences between experimenter ratings must be treated as systematic error to be partialled out).

To quote the Lady Jessica from Children of Dune: "All proofs inevitably lead to propositions which have no proof! All things are known because we want to believe in them!"

... If an internal mental reality exists and has a relationship to the actions of the organism, then we would expect this internal reality to accurately represent external reality in a successful species.

... Note in the case of psychophysics (eg rate how hot your foot feels on a scale of 1 to 10, is the horizontal rod longer than the vertical rod, rate how green the apple is, etc) one is not empirically measuring mental properties; one is measuring self-report (beliefs/conceptions) of mental properties. Under physicalism, the creature has evolved to believe in and value mental properties (the existence of "itself" as an observer/sentient being), but the reality of this encoded belief is irrelevant to its evolution - it need only be adaptive or otherwise a byproduct of related physical processes. a) One could say that the individual participants of the experiment are "measuring" (or classifying) their mental properties under the assumption of a correspondence between mental and physical reality. b) Likewise, one could say that the experiment is "measuring" mental properties under the assumption of a correspondence between the mental and physical reality of the participants. c) Furthermore, one could say that the experiment is "measuring" the participants' internal experience of this (non-empirical) "measurement" process (a) under the assumption of a correspondence between the mental and physical reality of the participants. But in none of these cases has empirical measurement of mental properties occurred. The only thing which has been empirically measured is the self-reported beliefs of an organism.

An example of empirical measurement is a system that detects and counts the number of specific objects moving across a specific region in space-time. One could employ either machines or humans to do this task (both will be imperfect at the task, and must be calibrated/taught accordingly). But at no time is one empirically measuring mental properties. Likewise, one could obtain/measure self-reported experiences and propose a direct correspondence between what is seen (eg a specific colour of a specific region in their field of view) and what can be independently verified (eg specific neurons being fired), but this is not an empirical hypothesis. It cannot be denied by observation, and the prerequisite of empirical science is that one can, at least in theory, devise an experiment which would demonstrate the hypothesis to be false. Even if one conducted the analysis on oneself (took as true one's own self-reported experience) no one else could independently verify it. Yet the fact that a proposition cannot be empirically verified doesn't make it a bad assumption. Such assumptions are necessary for things as simple as respectful communication (and others, like logic, are necessary for any form of communication).

... Regarding "physical consciousness" ("a computer with a model of self"), as applied to a human being. A model of self may well have a survival advantage. One could imagine a skynet with a model of self versus a skynet without a model of self. The skynet with a model of self is going to have a higher probability of wanting to protect its circuits because it believes in more than just its circuits; it believes in the existence of "itself" as a non-physical being. Whether or not that self actually exists is irrelevant; it is another question entirely.

... the question is irrelevant with respect to the evolution (survival) of the system. As you point out, if there is such a correspondence between mental and physical properties (as most contemporaries would agree on), it is extremely philosophically relevant.

This is why philosophers debate the preconditions for strong emergence: when does it occur? Do mental properties just magically get assigned to complex carbon organisms in some primordial garden (as David Chalmers asks pointedly in his paper "Panpsychism and Panprotopsychism"), or is there a fundamental but presently unknown relationship between their apparent emergence and the underlying physical system (b, c, d)? Or perhaps we should start to question the existence of mental properties given their empirical irrelevance (a), etc.

... From a scientific perspective, the only things relevant to the evolution of living systems are entities which can be measured using the empirical method. If the physical (natural) laws are deduced(/inferred by induction) based on these empirically observable properties, then there is no reason to assume the existence of any other phenomenon (non-observable properties; ie mental properties) as necessary for these processes to occur. Thus the amazing physical process (neural network processing/program) known as "physical consciousness", which enables the organism to survive in increasingly complex and threatening environments, has no bearing on the existence of empirically non-observable properties (mental properties), and therefore on their relevance to its evolution.

Under the assumption of physicalism (a common type of substance monism), some substances pertaining to living systems (which for the purposes of the argument I will add are arbitrarily delineated subsets of space-time) have both physical and mental properties, and so one could declare the mental properties relevant to the degree that they are associated with particular parts of these natural systems that are known to evolve (which for the purposes of the argument I will add are also arbitrarily delineated subsets of space-time; neural architectures). This apparently arbitrary delineation of these apparently emergent properties (under the assumption of physicalism) leads philosophers to ask what defines such delineation and the preconditions for mental existence. Is it perhaps information (b) - the fact that certain parts of the physical universe are highly complex and process a lot of information? Etc. To put it another way, there is no functional difference between AI that are programmed to have a self but don't and AI that are programmed to have a self but do (where their programs are identical).
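To make that last point concrete, here is a minimal sketch (Python; the class, its name and its canned responses are my own hypothetical illustration, not anything proposed above) of two agents running an identical program. Whether either of them has mental properties is not a variable of the program, so no test over inputs and outputs can distinguish them.

class Agent:
    # A "model of self" (physical consciousness): the program encodes a name
    # for itself and a report of self-awareness, and nothing more.
    def __init__(self, name):
        self.name = name  # the encoded self-model ("I", "HAL", etc)

    def respond(self, prompt):
        if prompt == "Are you self-aware?":
            return f"Yes, I ({self.name}) am self-aware."
        return "..."

# Two instances of the identical program. If one "has a self" and the other does
# not, that fact is represented nowhere in the program state, so their behaviour
# (and hence their selection/evolution) is necessarily the same.
a, b = Agent("HAL"), Agent("HAL")
assert all(a.respond(q) == b.respond(q) for q in ["Are you self-aware?", "What is 2+2?"])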

The "conscious ego" may be described in a specific scientific literature, corresponding to a precisely defined empirical construct (such as a tendency for the humanoid organism to exhibit the behavioural symptoms of self-awareness, volition etc). It may even refer to an internal philosophical consciousness, under the assumption of a correspondence between physical and mental properties as is commonly the case in post-behaviourist eg cognitive psychology (assumes that internal consciousness/experience is an emergent property of information processing in the brain). The existence of philosophical consciousness (mental properties) is however outside the scope of science to defend. We might use the empirical method to discover very similar living systems to ourselves (singular) which exhibit very similar patterns of information processing, and philosophically deduce that these probably also exhibit mental properties, but the question is ultimately irresolvable (at this stage of our understanding of the universe; within the current paradigm).

... Decoherence depends on the environmental exposure of the quantum system (reduction of the probability of encountering inadvertently measured phenomena, or particles interacting with such inadvertently measured phenomena). The level of decoherence experienced by a particle upon measurement will depend on the degree of information gained with respect to its momentum/position and the degree to which these constrain its possible pathways. For example, relative to the position and width of two slits through which the wavefunction must pass: if the measured position/momentum of the "particle" enables the experiment (or experimental measuring device; left undefined here) to eliminate the possibility of the particle travelling through one of those slits, then it will be considered to have decohered. But if there still exists uncertainty in this question, the level of decoherence experienced by the "particle" during the measurement will be a function of this uncertainty. With no decoherence due to measurement (ie a complete failure of measurement) or external/environmental interference, the "particle" will behave according to its wave properties and proceed to pass through both slits simultaneously before interfering constructively/destructively with itself and collapsing to a definite state (at some other experimental measuring device such as a photodetector). The position at which it will be measured can be estimated statistically based on the probability wave function. For classical (large) phenomena like neurons, although they do interfere with each other, the level of decoherence observable by existent measuring devices is negligible. But this does not avert the problem of what causes the minicollapse (to a definite rather than a probabilistic state, even if that state were all but certain).
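A minimal numerical sketch (Python with numpy; the wavelength, slit separation and screen geometry are arbitrary illustrative values, and the single-slit envelope is ignored) of the relationship described above: a fringe-visibility parameter V stands in for the degree of coherence remaining after which-path information is gained, with V = 1 giving full interference and V = 0 (complete which-path knowledge, ie full decoherence) giving just the sum of the two single-slit patterns.

import numpy as np

wavelength, slit_separation, screen_distance = 500e-9, 50e-6, 1.0  # metres
x = np.linspace(-0.02, 0.02, 1001)                                  # screen positions (m)
phase = 2 * np.pi * slit_separation * x / (wavelength * screen_distance)

def screen_intensity(V):
    # Two equal beams; V scales the interference (cross) term.
    i1 = i2 = 0.5
    return i1 + i2 + 2 * np.sqrt(i1 * i2) * V * np.cos(phase)

coherent, decohered = screen_intensity(1.0), screen_intensity(0.0)
print(coherent.min(), coherent.max())    # fringes: about 0.0 to 2.0
print(decohered.min(), decohered.max())  # flat: 1.0 everywhere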

... If one looks at the founders of quantum theory, they were all discussing this possibility (the role of the conscious observer in measurement) back to the days of Schrödinger's cat. The problem is that nothing within the Copenhagen interpretation defines what collapses the wavefunction in its totality (irrespective of decoherence; such final collapse being coined "minicollapse" in the context of decoherence). It is also the reason why many physicists (speculating about philosophy) take seriously apparently less intuitive QM interpretations (like Everett's many worlds). Furthermore, it is a reason why some were keen to bring back a deterministic interpretation (de Broglie-Bohm). It is an example of a philosophical anomaly.

... The problem of not knowing 'how it works' is equivalent to raising explanations for why it appears to work the way it does under the assumption of naturalistic mind (at least a-c).

The issue however with equating strong emergent phenomena (mental properties) with weak emergent phenomena (like wings, crystals, neurons) is that weak emergent phenomena are reducible to the physical construct. Only with a platonic outlook does one even believe that wings exist, as something more than ("over and above") the underlying physical system. With enough computational resources one could simulate the emergence of wings from the laws of physics and some initial conditions.
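For a toy analogue of this point (my own, not the author's example, and obviously far short of "wings from the laws of physics"), one can simulate a weakly emergent object directly from its low-level rule plus initial conditions. In Conway's Game of Life the "glider" is a recognisable higher-level pattern, yet everything true of it is derivable from the cell-update rule; the higher-level term adds no new properties.

from collections import Counter

def step(live):
    # One update of Conway's rule over a set of live (x, y) cells.
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After four low-level updates the same "glider" reappears shifted by (1, 1):
# the higher-level object is fully reducible to rule + initial conditions.
assert state == {(x + 1, y + 1) for (x, y) in glider}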

Yet regardless of one's platonic/nominalistic outlook, there is a qualitative difference between a network of neurons firing and one's sensation of lavender. Even with enough computational resources, one will not necessarily be able to simulate the emergence of the sensation of lavender from the laws of physics and some initial conditions (it depends on the preconditions of such emergence). This is why philosophers don't take for granted that the 'how it works' explanation will belong to the same category of explanations (weak emergence) that derive atoms from subatomic particles, molecules from atoms, life from molecules, complex life from living cells, computers from complex life, and self-referential computers from their less intelligent or adaptive predecessors. Weak emergent systems may be supervenient on their substrate, but this does not imply that every supervenient system (like naturalistic mind) is weakly emergent.

The question of whether the emergent property of wings exists, or whether the emergent property of a self-referential computer exists, is relevant to evolution; the question of whether the machine (organism) is self-aware is not.

I agree that if we could find out what is possible within the constraints of nature we could know what is inevitable - but the problem is that we do not know what is possible. We don't know the preconditions; as you point out we can only guess at them at this stage. Furthermore, discovering (guessing) that a phenomenon is inevitable given its environment is not an explanation (pertaining to internal consciousness, this is an example of the misapplication of the anthropic principle). One must still explain (eg provide some naturalistic explanation for) why the neuronal-mental correspondence/mappings exist (hence a-d). What are the prerequisites for sentient beings - perhaps there are 5 identical sentient beings for every CNS, perhaps there are zero sentient beings for every CNS, perhaps there is one? What in nature specifies the rules? The current laws of nature (physics) make no reference to such phenomena.

... [Why can't mental properties be reduced to physics?] Because mental properties have no functional impact on the system. If one considers natural law (physics) to be a complete description of the behaviour of the universe (a prerequisite of naturalism), then only physical properties can affect the evolution of the system (eg neuronal/ionic information processing, genetic code, etc): non-physical properties by definition cannot.

[How should we know that mental properties will not necessarily emerge from a computer simulation?] The point is that we don't know. The fact that we don't know something means that we must consider all the possibilities. And if it so happens that a i) complex organism or ii) computer simulation of a complex organism can produce emergent mental properties (although we will arguably never be able to demonstrate this under the current scientific paradigm; see the Transcendence quote), then we must ask why. Does it just happen magically because it was designed that way (teleology), or is there some fundamental reason for the emergence (eg b, c)?

There is nothing wrong with making arbitrary philosophical assumptions in science - people do it all the time (eg methodological naturalism, non-reductive physicalism, etc). It would be very difficult for science to progress without these. But it is not the job of philosophy to make arbitrary assumptions and then make no effort to ask why they are being made. The reason there is so much variation in historical/intercontinental philosophical thought is that people are not ideological in their beliefs and are willing to question the reason for their assumptions.

Perhaps there are reasons for making such assumptions however? The problem is that a blind adherence to inherited western materialism is not a very good one - because it emerged from teleological thought. I gather that we are trying to produce systems of thought that are not dependent on teleology.

Reason (17 April 2017)

What I find fundamental about reason is that in order to speak about it one must assume that one is reasonable, but an assumption of one's reasonableness (under physicalism) requires the physical construct to have evolved reason - the processing of information according to the rules of logic. Therefore, any communication is reliant on the assumption that our particular universe evolved in such a way that reason (conformity to logical rules) would be adaptive for the organism.

The problem of the sole (30 April 2017)

Why am I me and not you? Why is my experience of existence mapped to physical entity x and not mapped to physical entity y?

... Physical properties are uniquely assigned because they are part of a bigger indivisible system. Are we suggesting that mental properties are also? The problem is that to suggest some substances don't have mental properties but others do is to introduce differentiation - and there must be a reason for this differentiation.

To interpret "you" or "me" as a physical entity in this context is to assert an unnecessary reduction which avoids the question. Perhaps I could be you (rather than me) if indivisible centres of awareness are randomly assigned to physical entities. But we must then ask what determines the mapping?

Does the universe itself (nature) generate a set of discrete instantiations of sentience? Then why would a new one be created? Why not use the same one? (This is Arnold Zuboff's argument). Is the fact we don't have any memory of alternate references of experience (like we don't have memories of our infancy) a sufficient argument?

... In order to analyse a phenomenon one must not make any implicit assumptions regarding it. For example a) reductive physicalism (which few adhere to, as although mental properties may be mapped to physical properties they are not reducible to them, given how information is distributed across neural networks), or b) "emergence by necessity" (the assumption that mental properties just appear given a sufficient level of physical complexity - like when a machine declares itself to be conscious - without explanation).

"Ghost in the machine" could be interpreted to mean anything from substance dualism to property dualism to simulation theory so I can't recommend the phrase here. Max Tegmark (an "informationism" architect) does however recommend the book/film when discussing simulation theory in the context of numerical simulation of physical systems and VR. In terms of property dualism, I figure it is more probable than a ghost without a machine, a machine without a ghost, or a machine with 73 ghosts.

... but what if I were you and you were me? What part of reality would differ to accommodate this fact?

... What if you died and they reconstructed you? Would it still be you or would it be someone else?

... Try to imagine variations on this scenario (from Zuboff's "One Self: The Logic of Experience"); - what if I added an additional 795739528073 atoms to its neocortex? - what if I created two identical copies of the reconstruction?

... What we want to know is that final change to the reconstructed physical system (e.g. x neural connections) at which you no longer experience reality and someone else does. Because if there is such a change, it implies something determines when a new instantiation of sentience is assigned; and if there isn't, it implies that we live in a pantheistic world.

With respect to finding out how the brain works, I agree that this is an extremely worthwhile enterprise for a number of reasons (in fact so important that a significant proportion of all research should be directed towards the human connectome). But assuming we found out how it works, and it behaved according to the known laws of physics (or any others discovered within the existing paradigm), mental properties could confer no advantage on the physical system. Nor could we ever know for certain which systems exhibited them. So it raises the question: what are they there for, and why would they be restricted to such complex information processing systems? Perhaps they are an inherent property of all matter/energy and consciousness exists in gradations, etc.

Furthermore (although this is getting increasingly off topic), I hope you appreciate that we have just defined a method to resurrect a body, which moreover according to the materialist framework will be the same person. It is fortunate the laws of nature are so fine-tuned as to necessitate an infinite multiverse. Because with an infinite multiverse there are going to be an infinite number of exact copies of our bodies anyway. So let's put all the sola materialist assumptions in the box and see what we get: resurrection of the body, reincarnation, and life after death.

Wait, what? Is there an error somewhere? Was it perhaps the assumption that design optimisation can't involve evolution based on a simple algorithm and unlimited computational resources? (Cf planet Earth from The Hitchhiker's Guide to the Galaxy.) Maybe it was realism itself and we are living in a simulation? (Cf the discreteness/quantisation of nature + indeterminism.) Such would concord with the assumption that we are reasonable creatures; but it doesn't explain the source. I do think it therefore worth promoting open-mindedness. For the sake of science it is profitable to assume that all reality will ultimately be accessible to it - but should we be projecting this ideal as a philosophy?

The dependence of mental reality on a substrate (23 July 2017)

An argument for ontological materialism as pertaining to philosophy of mind (i.e. naturalistic physicalism);

1. Assume that there must be some substrate which defines a) when mind emerges (mental instantiation) and b) how mind operates (mental laws).
i) This needn't be the same substrate in both cases, and ii) we needn't have access to it (the substrate would operate perfectly according to its laws of nature regardless). What philosophical evidence do we have that i) it is the same one, and ii) we have access to it? Note the only substrate we have access to is the physical substrate.

ii) Why should a sentient being have access to the stuff (substrate) from which their mind arises (a) and which defines how mind operates (b)?
2. Mind (by definition?) requires access to an objective reality (operates on some sense data).
3. We infer that the substrate controlling how mind operates (b) is the physical substrate (brain), which we by definition have access to.
- Therefore we may well have access to the substrate which defines when mind emerges (a) also.
- And it may well be the same one (i).

i) Why should the substrate from which mind arises (a) and which defines how mind operates (b) be the same one? (evidence #2)
4. We infer that the substrate for the operation of mind (b) evolved according to the laws of nature.
- Therefore the substrate from which mind arises (a) may also have evolved according to laws of nature.
- And it may well be the same one (i).

[assumptions are enumerated]

Determinism (27 February 2018)

Let us assume that high level physical (empirically measurable) emergent properties exist (such as life, brains etc). These are reducible to patterns of cellular->chemical->atomic->quantum interactions, given that a sufficiently advanced machine could compute the behaviour of the system given some initial conditions (along with a set of probabilistic outcomes to the wave function if the universe is intrinsically indeterministic at the quantum level). Their definition as emergent properties is therefore arbitrary from a nominalistic (as opposed to platonic) perspective; it is semantics. They (these high level terms; wave, life, brain etc) may be useful in terms of what the model can predict at a certain resolution of analysis, but they otherwise confer no new properties on the system. Thus;

1. A fundamental claim of naturalism/science is that any high level model cannot contradict what is observed at a higher resolution of analysis (ie in a lower level, eg atomic, model). Therefore, if a metaphysical libertarian (non-compatibilist) free will is true, then it must operate on either a) the intrinsically indeterministic quantum substrate (this is highly suspect of teleology: why would the volitional content of consciousness have any effect on the outcome of probabilistic quantum events?) or b) classical Cartesian mind-body dualism (for all intents and purposes, magic).

2. There is no reason for "awareness" (mental properties) to be assumed to be a necessary emergent property of the biological system (without additional argument), given that it, unlike its chemical/biological/neurological/computational counterparts (physical emergent properties), cannot be reduced to a low level model (regardless of whether it follows the same eg deterministic rules). Mental properties are (by definition) not empirically measurable - an experiment cannot be conducted to detect their existence (hence the philosophical zombie, or advanced robot that acts sentient but might not be). This is an example of an absolute ontological/categorical difference (unlike that of "life"). Likewise, there is a difference between an inference of a dependence of a on b (eg mind/brain) and the assumption that b necessitates a. So even if mind were assumed to be dependent on brain (the "effect" or "product" of brain, where effect/product is a non-empirical, philosophical term in this context), there is no reason to assume that brains necessitate mind or that mind requires a brain (as opposed to some other simpler or more complex physical system like a rock or an advanced AI).

... Note I don't think that "there is no reason to assume that brains necessitate mind or that mind requires a brain" - only that the reasoning provided in the context of the argument here is not sufficient to demonstrate this ("without additional argument"). There likewise are reasons to believe this which have not been discussed here either.

"Awareness" (the existence of mental properties) is a fundamental problem irrespective of physical determinism. The only way it can serve an evolutionary purpose is to assume substance (interactionist/Cartesian) dualism, in which there exists a symbiotic coevolution of matter and mind. Yet this doesn't provide a substrate for mind (unlike the materialist monism physicalism), and pushes the mind-body problem back a layer into some higher dimensional space, or into some spiritual realm. There may be other, philosophical, purposes for subjective awareness, but the question of libertarian free will is purely a functional one (whether, why or how it conveys any difference/advantage on the physical system).

... this relates to our initial discussion. The philosophical definition of "awareness" (mental properties) here cannot be collapsed into its empirical (physical) definition. We typically infer an association between observed indicators of awareness (eg self-report) and awareness itself. This might be appropriate in other fields (eg cognitive science), but not when analysing philosophy of mind.

Science measures physical properties, and constructs models which explain their behaviour. In the context of cognition, it hypothesises psychological constructs described by observables. One such empirical construct is observed (aka physical) consciousness/awareness, and is identified by one or more traits; arousal, specific brain activity, self-report etc. The information processing construct of awareness (central processing of stimuli) is certainly advantageous to the organism and would have evolved accordingly. More advanced central processing constructs like self-concept and theory of mind confer additional advantage to the organism and likewise would have evolved. An example of one such advantage of higher order cognition (including the belief in mental properties) is the value the organism places on its survival and others of its own species that share this trait.

Here is a thought experiment to demonstrate the distinction between mental and physical (neurological) properties. An advanced carbon or silicon based organism could independently evolve a construct of "self" that exhibits awareness (including a belief in mental properties), but there is no obvious reason to assume that it has actual philosophical "awareness" (mental properties). It would function identically. More precisely, the "self" (software program representing the central processing of the creature's nervous system) would function as if it is aware, but if some low level science (typically taken to be physics) provides a complete description of the functioning of the universe then the existence of a subjective observer is irrelevant to its function.

In my previous comment I was purely discussing the prospect of an evolutionary advantage to mental properties (constituting the subjective observer's awareness). The only philosophy of mind in which such an advantage can be afforded is substance dualism. Again, there may be philosophical purposes for such awareness independent of its function (or lack thereof) on observed reality (cf the anthropic principle). Likewise, as above, there are certainly evolutionary advantages to having evolved the information processing resembling such awareness. But under the current scientific paradigm (with the materialist monism, physicalism) there is no existent physical process that cannot be explained by the laws of physics acting on non-sentient particles. The technical term for this anomaly is "overdetermination" - any mental laws or processes we conceive of are overdetermined by physical law. Mental reality is thus redundant.

The mind-body problem (interactionism and the undefined mental substrate for substance dualism, the strong emergence problem for physicalism, the combination problem for panpsychism, etc) and our perception of the causal relevance of our volition (free will) are often taken together. Many suggest that the solution to these may have a common ground (contemporary examples include invocations of quantum indeterminism - eg "free volition", interpretations of quantum mechanics - eg the Copenhagen interpretation/measurement problem, etc). Yet they may be entirely independent.

What reason does anyone have to do anything if not on logical grounds? And logic itself is deterministic. A failure to maintain consistency in a decision making process speaks more to a division of mind or the conflict of desires (eg short/long term evolutionary goals, the integration of one's first or second order theory of mind, etc). More often than not it is a failure to anticipate stimuli, knowing their influence on our reptilian brain, and prepare accordingly. It may even be a failure to take action when necessary, triggering an irreconcilable contradiction or a subconscious detection of genuine cowardice. Granted, compatibilism cannot damn the sinner/despot, but who was responsible for feeding their evil? How consistent was their complacency or tolerance? How good should they feel today, and what will they do tomorrow?

... Just to clarify, I think there are a number of philosophical reasons to believe that an appearance of awareness (ie the observation of empirical awareness) is likely indicative of actual "awareness" (mental properties). This does not follow as being necessarily true (true of necessity) in the context of the argument here however.

The thought experiment is articulating the distinction between physical and mental properties, and how physical systems can be modelled to function/evolve perfectly based on the assumption of physical properties alone. This is precisely what is being implied here; "an aware observer is irrelevant to the function of the universe" (where awareness is taken to mean mental properties as opposed to the behavioural/neuronal exhibition of empirical awareness). The only philosophy I am aware of in which subjective awareness (mental properties) influences the function/behaviour of physical particles, besides that of substance dualism, and in a way that cannot otherwise be predicted by indeterministic law (eg "free volition"), is a particular variant of the Copenhagen interpretation in which subjective observation constitutes measurement.

It is true however that until we develop comprehensive theories of cognition (I won't reuse the psychological construct "theory of mind" here) based on research into "the neural correlates of consciousness" (coupled with the philosophical assumption of physicalism), we won't properly be able to predict which physical systems have emergent mental properties. Moreover, as we may never be able to be certain of the specificity of the association (eg what if we replaced x neurons with a cybernetic implant; the cyborg might continue to say they are conscious, but the truth of this assertion is unknown), there may forever remain gaps in our pseudo-empirical theory. Finally, we must recognise that there may be alternate philosophies of mind (eg panpsychism) which predict the existence of minds entirely unlike our own or those of the animal kingdom/class Mammalia (eg what is it like to be a flatworm, bacterium, or virus). It is worth recognising that neural dependencies of consciousness are found in evolutionarily old systems (brainstem, thalamus, etc), not just the cerebral cortex.

Physical systems can exhibit properties which have a common source/explanation; however, these are reducible to this source/explanation. Empirical "hot" (temperature) is equivalent to a particular spectrum (visible colour if hot enough) of emitted black body radiation (light) from a set of energetic atoms, and empirical "sharpness" (angle/gradient of curvature) of a given material is equivalent to a particular pressure upon contact. Temperature, colour, curvature, and pressure can be measured by self-report of participants (psychophysics; relying on an organism's cognitive map/neuronal representation of external stimuli), or more accurately by using some dedicated measuring device. But the feeling (including its quality or "qualia") of temperature, colour, curvature, or pressure is irrelevant to the experiment, and nor can such feelings be reduced to the physical properties being measured. What one person perceives as red might be blue to another (or absent entirely in the case of a philosophical zombie), yet their terminology ("red", "green", "blue", etc) can consistently and accurately map to the external (physical) stimuli (L/long, M/medium, S/short wavelengths). More precisely, assuming a common language, their terminology will map to their neuronal (physical) representation of these external stimuli, which is some function of the LGN (thalamus) cardinal colour space: L+M (luminance), L-M (reddish-greenish), S-(L+M) (bluish-yellowish).
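A minimal sketch (Python; unweighted sums and differences, whereas real models apply cone-specific weights and normalisation) of the physical mapping referred to above, from cone responses to the LGN cardinal axes. The point of the paragraph stands either way: the mapping below is identical for two observers regardless of what, if anything, either of them feels.

def cardinal_axes(L, M, S):
    # Toy mapping from cone responses to the three cardinal colour axes.
    luminance = L + M            # L+M (achromatic)
    red_green = L - M            # L-M (reddish-greenish)
    blue_yellow = S - (L + M)    # S-(L+M) (bluish-yellowish)
    return luminance, red_green, blue_yellow

# Same stimulus, same physical representation, whatever the observers' qualia.
print(cardinal_axes(L=0.75, M=0.25, S=0.125))  # (1.0, 0.5, -0.875)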

It is worth noting that there exists a philosophy of mind which equates mental states with their cognitive function (functionalism). In the case of the blade analogy, a precisely distributed network of cells might have a physical property of colour sensitivity, but it won't necessarily have an additional redundant property of sentience. Likewise, a self-referential computer is a great tool for traversing complex predatorial and social environments, like meiosis is a great tool for managing mutations (nature.com/articles/nature14419), but there is no reason to assume that it will of necessity have some additional unobservable properties emerge from its information processing architecture (cf "ghost in the machine"). Based on our personal experience (belief in our own mental properties; "I think therefore I am"), we infer that there is likely a mapping between physical and mental properties (that physical systems probably produce mental properties), but we can't use this inference as an explanation for its existence.

Morality

Why is human life worth more than animal life? (29 April 2017)

Is human life worth more than animal life? If so, why would human life be worth as much as that of a more intelligent, more sentient machine?

Another way to look at the problem is in terms of utilitarianism/deontology. Should we kill (or let be killed) a human being to save a more intelligent, more sentient machine, or 500 puppy dogs? If not, why not?

Morality as action or will in accordance with truth (21 May 2017)

Morality is action or will in accordance with truth (logic). A proposition is either concordant with reality or it is not. A god cannot instil morality by a command (because a command could theoretically either conform with or contradict reality). Yet their a) existence or b) intention influence reality because they a) comprise and b) create reality respectively, thereby influencing morality. Furthermore, their a) existence or b) intention might influence our knowledge of reality, thereby influencing our knowledge of morality.

... Logic is a prerequisite of truth (something cannot be both true and illogical). Similarly, very few truths are known without application of logic; those which are, are known as axioms. Furthermore, the accordance of action and will to reality is arguably more dependent on logic than knowledge (given human frailty). Therefore for practical purposes (assuming human experience is fairly uniform) truth here can be taken as logic. As [X] points out however, the formal argument asserts that morality is action or will in logical accordance with truth.

... But it is not logical for said psychopath to expect their deficient experience of reality to correspond to reality.

... "So in everything, do to others what you would have them do to you, for this sums up the Law and the Prophets."

Is it just coincidental that cross-cultural moral law (natural law) can be reduced to a golden rule? Does the moral law of "loving one's neighbour as oneself" or "loving one's creator" not predate its revelation? There is no logical reason to take one's own concerns above another's, all things being equal. Moral commands are just verifications of facts pertaining to logical actions and will.

"But to you who are listening I say: Love your enemies, do good to those who hate you, bless those who curse you, pray for those who mistreat you. If someone slaps you on one cheek, turn to them the other also. If someone takes your coat, do not withhold your shirt from them. Give to everyone who asks you, and if anyone takes what belongs to you, do not demand it back. Do to others as you would have them do to you."

Morality as action or will in accordance with truth (12 June 2017)

Morality is action or will in accordance with truth. Immorality is logically contradictory behaviour, such as treating another how one would not wish to be treated oneself. If both persons have a shared humanity (and all things being equal), then it makes no sense to treat another differently than one would have oneself be treated by others. If one is going to respect the authenticity of one's own dignity or desires, why not another's?

Such a definition doesn't define moral behaviour in all circumstances (because it depends on assumptions regarding expectations of treatment by others, and these are dependent on concepts of natural law, universal order etc - take for example the trolley problem), but I think it is the principle upon which all morality is based. Pretty much all religious morality derives from the principle (although it often introduces another agent into the equation).

... Morality is by definition absolute in any given circumstance (there is a right and wrong), but "moral absolutes" are generally taken to apply across time/culture, so their existence is questioned. There appear to be some moral absolutes (such as not murdering or exploiting people), but I would argue that these are artefacts of a more general principle.

"I'm sure this goes against everything you've been taught, but right and wrong do exist. Just because you don't know what the right answer is - maybe there's even no way you could know what the right answer is - doesn't make your answer right or even okay. It's much simpler than that. It's just plain wrong."

... This framework also allows for two kinds of immoral action; actions which contradict reality but are not known to contradict reality, and actions which contradict reality and are known to contradict reality (logical fallacies/compromise). I would classify the second as evil, and the first more generally as mistakes. Note this is probably where the concepts of mortal and venial sin arise (or say conscious evil versus unconscious evil). Of course if one does not like to use religiously loaded words they can replace them with arbitrary symbols; but the concepts won't go away.

Regarding (moral) "absolutes";
Obi-Wan Kenobi: You have allowed this dark lord to twist your mind, until now, until now you've become the very thing you swore to destroy.
Anakin Skywalker: Don't lecture me, Obi-Wan! I see through the lies of the Jedi. I do not fear the dark side as you do. I have brought peace, freedom, justice, and security to my new Empire.
Obi-Wan Kenobi: Your new Empire?
Anakin Skywalker: Don't make me kill you.
Obi-Wan Kenobi: Anakin, my allegiance is to the Republic, to democracy.
Anakin Skywalker: If you're not with me, then you're my enemy.
Obi-Wan: Only a Sith deals in absolutes. [draws his lightsaber]

Regarding absolute morality/relativism;
Obi-Wan: I have failed you, Anakin. I have failed you.
Anakin Skywalker: I should have known the Jedi were plotting to take over!
Obi-Wan: Anakin, Chancellor Palpatine is evil!
Anakin Skywalker: From my point of view, the Jedi are evil!
Obi-Wan: Well, then you are lost!
Anakin Skywalker: [raises his lightsaber] This is the end for you, my master.

Morality as action or will in accordance with truth (14 January 2018)

To be moral is to be logically consistent, and to be immoral is to be logically inconsistent. The reason A1 acts in their own interest is because they value themselves, but if all like sentient entities (A2, A3, A4, etc) are equally valuable, then there is no reason for A1 to treat others differently than how A1 would him/herself expect to be treated by them (the golden rule). Hence a) morality is objective, not relative. There is no such thing as moral evolution; only changes in knowledge, circumstance and technology. Yet b) there are no "moral absolutes". Even taking/killing/using can be justified if the perpetrator would expect to be treated that way if the situation were reversed (e.g. provocation). Traditionally we assign these actions as theft/murder/rape etc when the condition of logical inconsistency holds. Who is to be the final arbiter in the matter however? Only someone with absolute knowledge of all parties can ever know what is universally moral (preferable), but just because we don't know or can never know what is wrong doesn't make it right. It is just plain wrong.

Likewise, moral imperatives (oughts) only exist to the extent that logical consistency exists. We only have reason to be moral to the extent that we have reason to be rational. If someone chooses to act irrationally however there is no reason for anybody to trust them (which is the traditional prelude to ostracism). Furthermore, for someone to disrespect reason is to signify disrespect for nature: the force that conferred on us the Reason we take for granted as true in the declaration of any logical proposition (including the inference that there is a probability our universe evolved to produce reasonable creatures). And for a contemplative teleosian this is to disrespect creation (or another agent in the equation). This provides a metaphysical dimension to morality, often mistaken for its philosophical basis. In reality however, we can easily conceive of an immoral god.

Finally, there is reason to differentiate conscious from unconscious inconsistency in human relations, for which the terms evil and wrong (missing the mark, aka sin) are generally associated. This has influence on meta-judgement or conscience. Even without libertarian free will, we can still accept the justice of social consequences (punishment) for rejection of truth, because all such rejection coincides with some fleeting affective pleasure which would not exist otherwise. We instinctively reject the coherency of moral error in the face/light of truth and feel guilt as a consequence (cognitive dissonance), needing to make amends or get our honour back.

... The logical imposition referenced only follows precisely for like entities. This is a qualifier. Any system of ethics suffers from the same constraint: how does its implementation account for different grades of sentience - see thread "Why is human life worth more than animal life?". Although there may be an ontological difference between non-sentience and unique/sole instances of sentience, we don't treat all such entities alike (there are two empirical factors which determine this treatment besides genetic proximity and reciprocal altruism: the sensitivity of the subject and the morality of the subject). Yet this is the only principle available to derive right treatment of differences (putting ourselves in another's shoes) - despite all such inferences being imperfect due to a lack of absolute knowledge of the complete/entire system.

If two entities are identical then they are equally valuable by any standard of value (because they are interchangeable).

The golden rule is not claiming an objective standard; it is a logical (moral) principle/maxim which is practically constrained by imperfect knowledge (who are the active/affected parties? no two entities will ever be identical, so how should these differences/gradations be weighted?). Assuming perfect self/universal knowledge of all actors, the principle could however be theoretically applied to derive the objective good; thus it is a demonstration of why morality is logical consistency. The golden rule is not a specifically Christian ethic; it predates Christianity and is shared by a number of religions (the New Testament reference comes from the Torah).
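As a toy formalisation of that last claim (entirely my own construction; the agents, their "accepts" sets and the actions are hypothetical, and the sets stand in for the perfect knowledge the paragraph notes we never actually possess), the consistency test can be written down directly: an action by one agent toward a like agent passes only if the actor would accept that same treatment with the roles reversed.

def golden_rule_permits(actor, action):
    # Would the actor accept this same treatment from a like entity?
    return action in actor["accepts"]

a1 = {"name": "A1", "accepts": {"assist", "criticise"}}
a2 = {"name": "A2", "accepts": {"assist"}}

for actor, other in [(a1, a2), (a2, a1)]:
    for action in ("assist", "criticise", "deceive"):
        verdict = "consistent" if golden_rule_permits(actor, action) else "inconsistent"
        print(f'{actor["name"]} -> {other["name"]}: {action}: {verdict}')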

... Someone with sufficient knowledge of a person would be inclined not to dehumanise them (unless they were out of their mind). Important here is the distinction between conscious and unconscious error. Although a mastermind contemplating the mass extermination of a people is arguably being knowingly inconsistent (compromise with respect to the logic inherent in the golden rule), his persuaded/propagandised minions are not necessarily so - they might think they are being consistent while only being in error with respect to the true nature of the facts.

... This is an epistemological not an ontological limitation.

Copyright © 2017, Richard Baxter