Intro to Philosophy – Week 3 – Philosophy of Mind

  • Cartesian dualism: the body is made of material stuff (i.e., stuff that has “extension”, i.e., takes up space) and the mind is made of immaterial stuff (i.e., stuff that does not have extension)
  • Princess Elizabeth of Bohemia was a student of Descartes who raised the following problem: how can an immaterial mind affect a material body? Our thoughts cause us to do things, but how does the immaterial interact with the material?
  • Another problem: how does ingesting a material substance (e.g., psychoactive drugs) affect an immaterial mind (e.g., by producing hallucinations)?
  • Physicalism = “all that exists is physical stuff”
  • Identity theory = one view of physicalism in which “mental phenomena, like thoughts and emotions, etc. are identical with certain physical phenomena”
    • e.g., the mental state of “pain” is identical to a particular type of nerve cell firing
    • a reductionist view – i.e., reduces mental states to physical processes
    • token = an instance of a certain type (e.g., Fido and Patches are two tokens of the type “Basset hound”)
    • token identity theory = each instance of a mental phenomenon (e.g., a particular pain that I am feeling) is identical to a particular physical state that I’m in
    • type-type identity theory = types of mental phenomena (like “pain” or “sadness”) are identical to types of physical phenomena (e.g., a particular cocktail of neurotransmitters, hormones, etc.)
      • type identity theory is a stronger claim than token identity theory
  • problem with type-type identity theory:
    • a human, an octopus, and an alien can all feel pain, but have very different brain states
    • Hilary Putnam raised this issue of “multiple realisability” in 1967 – the same mental state can be “realised” by different physical states
    • similarly – currency can be coins & paper in one place, but shells in another place – so currency is “multiply realisable”. It doesn’t matter what they are made of – what matters is how they function.
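The multiple-realizability point can be sketched in code (the class names and the `pain_state` method here are invented for illustration, not from the lecture): what matters is whether a creature plays the pain role at all, not what physically realizes that role.

```python
# Two very different physical substrates realizing the same functional
# role ("pain"). All names are illustrative placeholders.

class HumanBrain:
    def pain_state(self):
        return "certain nerve cells firing"

class OctopusBrain:
    def pain_state(self):
        return "activity in a distributed nervous system"

def realizes_pain(creature):
    # Only the role matters: anything that plays the pain role counts,
    # regardless of what it is made of.
    return hasattr(creature, "pain_state")

print(realizes_pain(HumanBrain()))    # True
print(realizes_pain(OctopusBrain()))  # True
```

The two classes give physically different answers to what pain *is* in them, yet both count as being in pain – mirroring the human/octopus/alien example above.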
  • Functionalism = “we should identify mental states not by what they’re made of, but by what they do. And what mental states do is they are caused by sensory stimuli and current mental states and cause behaviour and new mental states”
    • e.g., the smell of chocolate (a sensory stimulus) causes a desire for chocolate (a mental state), which may cause the thought (another mental state) “where is my coat?” and the behaviours of putting on a coat and going to the store; but if I have a belief that there is chocolate in the fridge, the desire for chocolate could lead to the behaviour of getting the chocolate out of the fridge
    • functionalism gets away from the question of “what are mental states made of?” and instead focuses on what mental states do
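This causal-role picture (essentially Putnam’s “machine table” idea) can be sketched as a toy state machine; the stimulus, state, and behaviour names below are invented to mirror the chocolate example above.

```python
# Toy "machine table": a mental state is identified by its causal role --
# which (stimulus, current state) pairs lead into and out of it, and
# which behaviour it produces. All names are illustrative.

table = {
    ("smell_chocolate", "neutral"): ("none", "desire_chocolate"),
    ("believe_chocolate_in_fridge", "desire_chocolate"): ("get_chocolate_from_fridge", "satisfied"),
    ("believe_no_chocolate_at_home", "desire_chocolate"): ("put_on_coat_and_go_to_store", "seeking_chocolate"),
}

def step(stimulus, current_state):
    # Inputs (sensory stimulus + current mental state) jointly determine
    # both the behaviour produced and the new mental state entered.
    behaviour, next_state = table[(stimulus, current_state)]
    return behaviour, next_state

print(step("smell_chocolate", "neutral"))
# ('none', 'desire_chocolate')
print(step("believe_chocolate_in_fridge", "desire_chocolate"))
# ('get_chocolate_from_fridge', 'satisfied')
```

Note that nothing in the table says what a state is *made of* – a brain, a computer, or an alien nervous system could all implement the same table.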
  • philosophers often use the computer as a metaphor for mind – a computer is an information processing machine and it doesn’t matter what it’s made of, it only matters what it does
  • this is a computational view of the mind
  • Turing Test – you ask an entity questions and you don’t know if you are talking to a person or a computer. If we can build a computer that can fool the questioner into thinking it is human, we have built a computer that is sufficiently complex to say that it can “think” or has a “mind”
    • some problems with the Turing test:
      • it’s language-based, so a being that can’t use our language couldn’t pass it
      • it’s too anthropocentric – what about animal intelligence? or aliens
      • does not take into account the inner states of a machine – e.g., a machine calculating 8 + 2 = 10 is going through a process, but a machine that just has a huge database of files and pulls the answer 10 out of its “8 + 2” file isn’t – we wouldn’t want to say that the latter is “thinking”
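The last point – that identical outputs can hide very different inner processes – can be shown with a toy pair of machines (entirely invented for illustration):

```python
# Two machines with identical input/output behaviour: one computes the
# sum, the other just looks it up in a pre-built table. A test that
# only sees outputs (like the Turing test) cannot tell them apart.

def computing_machine(a, b):
    return a + b  # goes through an addition process

# A "huge database of files": every answer stored in advance.
lookup_table = {(a, b): a + b for a in range(10) for b in range(10)}

def database_machine(a, b):
    return lookup_table[(a, b)]  # just pulls out the stored answer

print(computing_machine(8, 2))  # 10
print(database_machine(8, 2))   # 10 -- same output, different inner process
```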
  • John Searle’s Chinese Room Thought Experiment
    • You are in a room where slips of paper with symbols on them are delivered to you through an “input” slot in the wall, and you have a book that tells you what symbols to write in response; you write them on a slip of paper and pass it through the “output” slot. As it turns out, the symbols are Chinese characters, and the book is written in such a way that you are giving intelligent answers to the person sending you the questions. When they receive your “answers”, they are convinced that a being with a mind is answering their questions – but you have no idea that they are questions, and no idea what your responses say, because you can neither read nor write Chinese. This is how computers work – they get an input and are programmed with a list of rules to produce a certain output. But the computer is no more “thinking” than the person in the room understands Chinese. There is no understanding going on within a computer – it doesn’t have a “mind”, and if it passes the Turing test, it’s just a really good simulation.
    • syntactic properties = physical properties, e.g., shape
    • semantic properties = what the symbol means/represents
    • a computer only operates on syntactic properties – it is programmed to respond to the syntactic properties of a given symbol with a given response – it does not “understand” its semantic properties
    • aboutness of thought – thoughts are “about” something – they have meaning
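A minimal sketch of the room as a program (the glyphs below are arbitrary stand-ins, not real Chinese): the rulebook pairs input shapes with output shapes, and nothing in the code represents what any symbol means.

```python
# Purely syntactic processing: match the shape of the incoming slip,
# return the shape the rulebook pairs with it. The program has no
# access to (and no need for) the symbols' semantic properties.

rulebook = {
    "▲◇": "●△",
    "■◆": "○▽",
}

def room(slip):
    # "?" stands in for a stock "please repeat" symbol for unknown input
    return rulebook.get(slip, "?")

print(room("▲◇"))  # ●△ -- an "intelligent answer" with no understanding
```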
  • some problems with the computational view of the mind
    • doesn’t explain how we get the “aboutness of thought”
    • the “gaping hole of consciousness”
    • the hard problem of consciousness: why do some lumps of matter have consciousness while others don’t?
  • a lot of philosophers were writing when computers were becoming a big deal, so perhaps their thinking was limited by treating minds as computers – perhaps we should step away from the computational metaphor for the mind because it limits our thinking?

 

Follow-up discussion

  • most philosophers use the phrase “intentionality”, which the prof of this session avoided when she talked about “aboutness of thought” because it comes with a lot of philosophical “baggage” that she didn’t want to get into
  • in the discussion forum of the class, people were asking things like “do animals have minds? and how could we know if animals have minds?”
    • one school of philosophy says that you need to have language to have thoughts and since animals don’t have language (as far as we know), they don’t have thoughts
    • but others don’t think this is a fair argument – e.g., if a dog is barking at a squirrel in a tree, even though it might not have as “rich” a concept of a squirrel as humans do (e.g., a squirrel is a mammal with a bushy tail, etc.), we can still infer from its behaviour that it is “thinking” something we can roughly describe as “the dog thinks there’s a squirrel in the tree”
    • she suggests checking out Peter Carruthers’ work on animal minds for more information
  • someone in the discussion said that the Turing test doesn’t test if a machine is conscious, but rather tests at what point humans are willing to attribute conscious states to other things (similarly, at what point do infants start to think of other people as having consciousness?)

 

This entry was posted in notes, online module notes, philosophy.
