
Defining Mental States Mechanically

11 Mar 2018

Reading time ~9 minutes

Defining Mental States Mechanically: An Analysis of Machine Functionalism, Causal-Theoretical Functionalism, and the Implications of Solipsism

I believe that Putnam’s argument for functionalism, and more specifically machine functionalism, is the most plausible theory of mind we examined in class. I will begin this paper by introducing Putnam’s proof of the multiple realizability of pain and showing how Turing machines can accurately model the multiply realizable functions of mental states. I will then address a common counterargument, the rejection of Putnam’s first premise that the same pain can exist in two separate organisms; compare machine functionalism to its successor, causal-theoretical functionalism; and analyze how the two differ in their responses to this rejection of Putnam’s proof. I will conclude by embracing the holistic nature of Turing machines and the “teleological appropriateness” of modeling mental states as states on a Turing machine to shed light on the dark reality of solipsism, the belief that no two people can share the same mental states.

The cornerstone of functionalism, and by extension machine functionalism, is the idea that mental states are multiply realizable, second-order properties that cannot be defined by structural properties. Putnam uses pain as an example to prove this, arguing that both humans and octopi feel pain even though humans and octopi have two different neurological systems. Thus pain, and by extension other mental states, is multiply realizable, given that pain exists in both octopi and humans despite their different neurological systems (130).

Machine functionalism generalizes Putnam’s argument, holding that minds are just computational machines, or Turing machines, and that mental states are best represented as states of a Turing machine. Machine functionalists define mental states as mathematical states, or functions of other states, because according to the Church-Turing thesis a Turing machine can perform any purely mechanical function. Thus the mind itself is functionally modeled by a machine table that sets the states of a Turing machine from given inputs, in the same way the mind sets mental states through input from the external world. Because Turing machines are completely functional, they align with Putnam’s argument: like mental states, their states are multiply realizable, independent of structure, and defined functionally in terms of other states.
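The machine-table idea can be made concrete with a small sketch. The following is only an illustrative toy, not anything Putnam or Kim propose; the state and stimulus names are invented for the example. The point is that each "mental state" is individuated entirely by its entry in the transition table, never by what physically realizes it.

```python
# A toy "machine table" in the spirit of machine functionalism:
# each state is defined purely by how it maps inputs to a next
# state and an output, never by its physical realization.
# All state and stimulus names here are invented for illustration.

MACHINE_TABLE = {
    # (current_state, input)     :  (next_state, output)
    ("calm", "tissue_damage")    : ("pain", "withdraw_limb"),
    ("pain", "tissue_damage")    : ("pain", "cry_out"),
    ("pain", "wound_treated")    : ("calm", "relax"),
    ("calm", "no_stimulus")      : ("calm", "idle"),
}

def step(state, stimulus):
    """Advance one step: the state's identity is exhausted by
    this transition function, with no reference to structure."""
    return MACHINE_TABLE[(state, stimulus)]

next_state, output = step("calm", "tissue_damage")
```

Note that the table defines "pain" partly in terms of the other states ("calm") it transitions to and from, which is the holism the later sections of the essay turn against machine functionalism.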

To better illustrate the realization of the mind as a machine table, consider two different computers running the same software. Despite differences in their hardware, the computers can run exactly the same software. Machine functionalism holds a similar view of the mind and the brain: the mental states of the mind are the software that is physically realized in the brain. In Putnam’s proof, the mental state of pain is realized in the anatomy of both octopi and humans, and this can be generalized to any number of different anatomies and organisms, since the state is defined by its function and not its structure.
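The two-computers analogy can also be sketched directly. In this hypothetical example (class names and internals invented for illustration), two structurally different "realizers" implement the same functional role, so anything defined by that role is shared between them:

```python
# Multiple realizability: one functional role ("register pain when
# damaged, report it when asked") realized by two structurally
# different substrates. Names are invented for illustration.

class HumanNervousSystem:
    def __init__(self):
        self._c_fibers_firing = False      # one physical realization
    def receive(self, stimulus):
        if stimulus == "tissue_damage":
            self._c_fibers_firing = True
    def in_pain(self):
        return self._c_fibers_firing

class OctopusNervousSystem:
    def __init__(self):
        self._active_nociceptors = 0       # a different realization
    def receive(self, stimulus):
        if stimulus == "tissue_damage":
            self._active_nociceptors += 1
    def in_pain(self):
        return self._active_nociceptors > 0

# Same input, same functional output, different internal structure:
for organism in (HumanNervousSystem(), OctopusNervousSystem()):
    organism.receive("tissue_damage")
    assert organism.in_pain()
```

The internal attributes differ in kind, but since "pain" is defined only by the input-output role, both organisms count as being in the same state.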

Critics of machine functionalism reject the multiple realizability of pain by rejecting the first part of Putnam’s argument, that octopi and humans both feel the same pain. Pain is difficult to quantify, and even though octopi and humans react similarly to pain, that does not necessarily mean they experience the same pain. Even if it did, reactions to the same pain stimulus can differ. For example, suppose two people each step on a nail that penetrates the foot in the same spot, causing the same painful sensation. One responds by screaming profanity; the other says nothing but proceeds to treat the wound. By the functional definition of pain, two people would need to react in the same way to be experiencing the same pain. However, they reacted differently to the same input and thus are not experiencing the same pain despite receiving the same input. This example rejects Putnam’s first premise and thus means that pain may not be multiply realizable.
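The nail objection can be restated as a tiny sketch (table entries invented for illustration): because the two agents' transition tables disagree on the very same input, a strictly functional criterion counts their "pain" states as different states.

```python
# The objection: two agents in the same state receive the identical
# input, but their (hypothetical) machine tables map it to different
# outputs, so functionally their states are not the same state.

TABLE_A = {("calm", "nail_in_foot"): ("pain", "scream_profanity")}
TABLE_B = {("calm", "nail_in_foot"): ("pain", "treat_wound_silently")}

def same_functional_state(table_a, table_b, state, stimulus):
    """Functionally identical states must yield identical
    (next_state, output) pairs for the same input."""
    return table_a[(state, stimulus)] == table_b[(state, stimulus)]

# Different outputs for the same input: the functional criterion
# says these two are NOT in the same mental state.
assert not same_functional_state(TABLE_A, TABLE_B, "calm", "nail_in_foot")
```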

This is problematic for machine functionalism because Turing machines define states holistically, in terms of other mental states: if two people’s mental states for the same stimulus differ, as the example of stepping on a nail shows they will, then all the mental states of the two people are different. This implies solipsism, that no two people can have the same mental state, because they have different machine tables that define every state differently. Functionalists are then left with two options: keeping machine functionalism and accepting the bleak reality of solipsism, that we are trapped in our own minds, unable to confirm the existence or understanding of anyone else’s mind or mental states; or abandoning machine functionalism for causal-theoretical functionalism.

The latter of the two options, causal-theoretical functionalism, seeks to preserve the general ideas of functionalism put forth by machine functionalism without implying solipsism. Causal-theoretical functionalism and machine functionalism both rely on Putnam’s proof of the multiple realizability of pain to validate their main ideas, hinging on the claim that mental states are defined by their second-order functional properties. Both reject the idea that structural properties define mental states and seek to define them in physically realizable, functionally similar, but structurally variable terms; as a result, both are troubled by the rejection of the first premise of Putnam’s proof.

Machine functionalism seeks to define mental states in a purely mechanical way, drawing support from the Church-Turing thesis. In contrast, causal-theoretical functionalism seeks to solve the problem of solipsism and revalidate Putnam’s proof by abstracting away the individual in machine functionalism and focusing on theoretically defining the functions of mental states. Rather than creating distinct machine tables for every individual mind, the cause of solipsism in machine functionalism, causal-theoretical functionalism seeks to provide complete causal theories of mental states using only physical inputs and behavioral outputs. These theories define mental states apart from each other based on observable behavior, unlike Turing machines, which tie mental states to the individual by defining mental states in terms of other mental states.

Causal-theoretical functionalism allows two people to have the same mental state by creating an ideal, purely physical, behaviorally defined definition of a mental state. By addressing theoretical rather than individual definitions of mental states, causal-theoretical functionalism avoids the individual cases of pain raised by the example of stepping on a nail, defining pain in a way that encompasses the reactions of both individuals without defining the state in terms of other mental states. Rather, causal-theoretical functionalism defines mental states apart from each other on purely physical grounds, thus revalidating Putnam’s first premise that octopi and humans feel the same pain while avoiding the implications of solipsism caused by the holistic nature of Turing machines.

Machine functionalism cannot avoid the rejection of premise one through abstraction, because it generalizes machine states to the mental states of individuals by defining the states in terms of other states. Thus, machine functionalists are forced to accept solipsism to revalidate Putnam’s proof. This implies that we are trapped in our own minds, but it is unavoidable: the holistic nature of Turing machines means that having the same mental states would entail having the same machine table, and thus the same mind, which is not possible. Embracing solipsism is thus necessary to validate Putnam’s proof and revalidate machine functionalism.

Jaegwon Kim responds to the challenge posed by solipsism by arguing that both humans and octopi experience essentially the same pain, and that it is not necessary for total human psychology and total octopus psychology to coincide to prove machine functionalism (154). Rather, it is sufficient for “there to be some Turing machine that is a correct description of both states and in which pain exists as an internal machine state” (155). Critics of machine functionalism overlook the success in modeling similar mental states of individuals, and how Turing machines can build off existing states to create new states in much the same way humans build off their previous experiences.

The idea that a single Turing machine capable of modeling all mental states must exist to prove the validity of machine functionalism is an overgeneralization. Instead, a means of modeling a mental state such as pain so that it functionally captures that state is more than sufficient. The critics of machine functionalism are overgeneralizing the existence of mental states in Turing machines and forgetting that the individual machines are not meant to represent the “total psychologies” of mental states (155). Octopi are not capable of vocalization, so it is not logical to assume that because octopi cannot moan or scream in reaction to pain, they do not feel the same pain or use the same or similar states on Turing machines as humans. Both octopi and humans react to pain in a way that benefits their survival and seeks an end to the pain, and thus they are essentially reacting in the same, “teleologically appropriate” way (159). Kim argues that this makes humans’ and octopi’s pain enough alike to be generalized to similar states on a Turing machine, while still accepting the reality of solipsism.

Teleological appropriateness can be applied to the earlier example of stepping on a nail, which was used to reject machine functionalism. The two humans gave appropriate reactions to pain in their given environments and experienced essentially the same pain. This application softens the bleakness of solipsism: humans can have essentially the same state and functional realization of that state without having exactly the same state. Although we may be trapped in our own heads, we can still have mental states similar to others’ and do not exist entirely apart from the external world, since we still feel similar, “teleologically appropriate” mental states, just not identical ones.

Solipsism is an unavoidable consequence of the holistic nature of Turing machines and the application of second-order functional properties to individual mental states. One can avoid solipsism while still holding onto functionalism by accepting causal-theoretical functionalism, but to define the mental states of an individual, solipsism is necessary. If two people were to have the same mental state, they would have to have exactly the same machine table, and as a result all the same beliefs and experiences. Such a case is impossible, so saying that two people have the same mental state is not accurate, because mental states define other mental states, and every experience a person has shapes their beliefs and mental states. Thus, saying two people have the same mental state is akin to saying two people have all the same experiences and the same holistic mind, which is not possible.

Work Cited

Kim, Jaegwon. Philosophy of Mind. 3rd ed., Westview Press, 2011.
