curious notes

Does functionalism say I can have bat mental states?

I have become increasingly puzzled about why arguments like the one in Nagel's famous "What Is It Like to Be a Bat?" are taken to tell against functionalism about the mind.1 These arguments seem to me basically confused about what the commitments of functionalism are. In a nutshell, they seem to assume that functionalism says you can put a human mind into a bat-mind mental state -- but this is actually ruled out by the functionalist definition of mental states. The arguments thus tax the functionalist with a commitment they should already reject for reasons internal to (indeed central to) functionalism. I'll explain my thinking below. I'm aware that I am leaving out a lot of possible dialectical elaborations, some of which may be genuinely important. I may or may not get to them at some point. At this stage I just want to regiment in a clear way my puzzlement about the influence of these arguments. I would love to know what the existing literature has to say about these issues -- I have not encountered quite this line of thinking.

Nagel's "What Is It Like to Be a Bat?" presents a schema for an argument against functionalism about the mind. The structure of the argument is as follows.

  1. A human can't be put into the feeling state a bat is in when using its sonar by learning a complete causal-functional characterization of that state.
  2. Since functionalism holds that a bat's feeling states just are causal-functional states, if functionalism is true, a human can be put into a bat's feeling state by learning a complete causal-functional characterization of that state.
  3. Therefore, functionalism is false.2
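
In schematic form the argument is a modus tollens, so it is valid and everything turns on the premises. Here is a minimal formal rendering, in Lean, with F and A as bare propositions (F for "functionalism is true," A for "a human can be put into a bat's feeling state by learning its causal-functional characterization"):

```lean
-- F: functionalism is true
-- A: a human can be put into a bat's feeling state by learning
--    a complete causal-functional characterization of that state
-- Premise 1 is ¬A; premise 2 is F → A. The conclusion ¬F follows
-- by modus tollens, so the argument is valid and any resistance
-- has to target one of the premises.
example (F A : Prop) (p1 : ¬A) (p2 : F → A) : ¬F :=
  fun hF => p1 (p2 hF)
```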

Some replies to Nagel reject premise 1 (see notably Dennett), while others reject premise 2. I will argue that functionalists should accept premise 1 -- it is in fact entailed by general principles of functionalism. Premise 2 should then be rejected for reasons internal to functionalism. The argument against functionalism is valid but fails as unsound: its second premise is false by the functionalist's own lights.

The functionalist argument for premise 1 is as follows. For the functionalist, feeling states are causal-functional states of the overall feeling system. Each such state is individuated by its place in the overall causal-functional network connecting all such states -- by how it relates to the other states and to the outward behaviors of the system. Notably, on this picture, no feeling state has any essential or intrinsic character -- any state's character is exhausted by its place in the overall network.

For example, consider two distinct feelings of a bat, say the feeling it has when its sonar returns from pinging a crowded cave (Feeling 1), and the feeling it has when it drinks water without emitting or receiving any sonar pings (Feeling 2). It is senseless to imagine that Feeling 1 and Feeling 2 could swap places in the bat's causal-functional state network. Everything that individuates Feeling 1 is a feature of its place in that network -- any "other" box in the same place in that flow-chart would be the same feeling. Once you specify all its relational properties, there's nothing more to say about it -- the box is empty. (This is in fact the consequence of functionalism that "What It's Like" thought experiments are supposed to disprove.) Accordingly, to swap the boxes labeled Feeling 1 and Feeling 2 in the network is to change nothing at all -- qua boxes, they are the same. Perhaps I have belabored this point, but it seems to me that it is not fully appreciated by most people, even though it is at one level just the most basic commitment of functionalism.

Anyway, the general principle is that, according to functionalism, two states are the same if and only if they have the same place in the causal-functional network. It follows that, if two feeling states do not have the same place in the causal-functional network, they are not the same feeling state. Since humans and bats are behaviorally and internally very different, there is no causal-functional state of the human system that is the same as any causal-functional state of the bat system. Accordingly, no human feeling state is the same as any bat feeling state. Because there is no human feeling state that is the same as the feeling of a bat getting a sonar pingback, a human cannot be put into that state. A fortiori, a human cannot be put into that state by learning a complete causal-functional characterization of it. QED: premise 1 can be derived straightforwardly from the most basic commitments of functionalism.3
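
To make the "empty box" point concrete, here is a minimal sketch in Python -- the states, stimuli, and outputs are made up for illustration, not a model of any actual bat. It treats a causal-functional network as a transition table and shows that relabeling the states through a bijection changes nothing observable:

```python
# A toy causal-functional network: states are bare labels; all the
# content lives in the transition structure
# (state, input) -> (next state, output).
network = {
    ("F1", "ping_return"): ("F1", "echolocate_more"),
    ("F1", "water"):       ("F2", "drink"),
    ("F2", "ping_return"): ("F1", "echolocate_more"),
    ("F2", "water"):       ("F2", "drink"),
}

def run(network, start, inputs):
    """Trace the outward behavior produced by a sequence of inputs."""
    state, outputs = start, []
    for i in inputs:
        state, out = network[(state, i)]
        outputs.append(out)
    return outputs

def relabel(network, mapping):
    """Swap the 'boxes': rename every state via a bijection."""
    return {(mapping[s], i): (mapping[s2], out)
            for (s, i), (s2, out) in network.items()}

swapped = relabel(network, {"F1": "F2", "F2": "F1"})

stimuli = ["ping_return", "water", "water", "ping_return"]
# Identical outward behavior from the corresponding start state:
# the labels do no individuating work at all.
assert run(network, "F1", stimuli) == run(swapped, "F2", stimuli)
```

The relabeled table is not a different machine whose states have been "swapped"; under functionalist individuation it is the very same network, which is why the imagined swap of Feeling 1 and Feeling 2 changes nothing.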

The rejection of premise 2 also follows immediately. The only mystery is why the opponent thinks the functionalist should be committed to premise 2 in the first place.4

Here is an analogy. I am a functionalist about computer program execution states. That is, I think a computer program execution state is defined by its role in the causal-functional network of behavioral states of the computer it is running on. MS Word running with this text in the buffer is individuated by a long list of facts like, "If I type this sentence in here, it will appear in context in the expected way in the buffer," and so on. (I'm not sure, but I think it would actually be nuts not to be a functionalist about computer program execution states.)

But now here comes Nagel, and he goes: if you're a functionalist about computer program execution states, then you should think that an Apple II can instantiate the execution states of MS Word running on your Windows 10 PC. All you would have to do is describe, in suitable terms, the causal-functional profile of MS Word on your Windows 10 PC to your Apple II. But you can't do that -- your Apple II can't instantiate the same program execution states as your Windows 10 PC can. For example, it only has 16-color graphics, and Word on Windows 10 probably uses the 3D accelerator lol. In fact, you can't even in principle emulate Word on Windows 10 on the Apple II, because of memory limitations -- the Apple II state space is not large enough to accommodate the things you can in principle do in MS Word on Windows 10. But even if you could emulate it, your Apple II would not be in the same program execution states as your Windows 10 PC was, because unlike your Windows 10 PC it would be emulating a Windows 10 PC.

Ok, but then: if Nagel's argument applies with equal force to functionalism about computer programs, which seem like the perfect candidates for functionalist characterization, then it is hard to see how it can be very telling against functionalism about the mind.
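
To put rough numbers on the state-space point, here is a back-of-the-envelope sketch. The memory figures (48 KB for a stock Apple II, 8 GB for a modest Windows 10 PC) are illustrative assumptions, and only RAM is counted:

```python
import math

apple_ii_bits = 48 * 1024 * 8    # 393,216 bits of RAM state (assumed)
pc_bits = 8 * 1024**3 * 8        # ~6.9e10 bits of RAM state (assumed)

# The number of distinguishable memory configurations is 2**bits.
# These numbers are far too large to print directly, so compare
# their decimal digit counts instead.
apple_digits = int(apple_ii_bits * math.log10(2)) + 1
pc_digits = int(pc_bits * math.log10(2)) + 1
print(f"Apple II: 2**{apple_ii_bits} states (~{apple_digits:,} digits)")
print(f"PC:       2**{pc_bits} states (~{pc_digits:,} digits)")

# There is no injection from the larger state space into the smaller
# one, so most PC execution states can have no Apple II counterpart,
# emulated or otherwise.
assert pc_bits > apple_ii_bits
```

On these assumptions the in-principle impossibility of full emulation is just counting: the larger state space cannot be mapped one-to-one into the smaller.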

  1. I think these arguments are often presented as arguments against "physicalism" or "materialism," an umbrella that may or may not include functionalism. But the application to functionalism is very clean and, in my opinion, reveals the weakness of these arguments especially clearly.

  2. The premises can be put in terms of knowledge -- that if functionalism is true, a human can know what a bat's feeling state is like by learning its causal-functional characterization. But the notion of "knowledge" being used here is experiential -- to know what a feeling state is like, in this sense, is to have experiential access to the state or to a suitably analogous feeling state. Thus, instead of using the (in this context theoretically unclear) notion of knowledge, we could if we liked restate the premises in terms of a human accessing a suitable analog of a bat feeling state.

  3. The biggest objection to this argument is obviously that it proves too much. Because it is implicitly maximally fine-graining the identity conditions of functional states, it presumably shows that two people can't be in the same feeling states, either, and maybe even that a person can never be in the same feeling state twice. The opponent will then assert that the functionalist has to commit to a non-maximally-fine-grained identity relation between feeling states, on pain of absurdity, which blocks the argument I have used against premise 1. Against this, the basic idea, which I'm not going to pursue right now, is to say that functionalism only answers the identity question for maximally fine-grained functional states. Any coarser-grained identity relation is really a matter of convention rather than metaphysics -- we adopt identity relations that suit our pragmatic purposes, and can switch between them as our purposes change. This fluidity is empirically very familiar -- when we sympathize with someone, we say we know their feeling, but we can also always fine-grain and find differences between our feelings. Each time I see a stop sign, the experience is, I think, in some way unique, but normally I disregard what makes it unique. Etc. The similarities that are grouped together by a particular coarse-graining of state identity are, for the functionalist, ultimately functional similarities. But this is a light and comfortable commitment. In this framework, we can also accommodate arguments like Dennett's that a human actually can know what it's like to be a bat, by close enough study of a bat. We'd just pick out a suitable coarse-graining in which certain kinds of empathetic imagination are "close enough" to the feelings they (in a suitable sense accurately) imagine.
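
  As a small sketch of the idea, with hypothetical feature names: "sameness" of feeling-state tokens is relative to a chosen equivalence relation, and we can move between relations as our purposes change.

  ```python
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class FeelingToken:
      kind: str         # e.g. "seeing_stop_sign" -- hypothetical features
      intensity: float
      context: str      # everything that makes this particular token unique

  # Maximally fine-grained identity: full-profile equality. On this
  # relation, tokens are "the same" only if they match in every respect.
  def same_fine(a: FeelingToken, b: FeelingToken) -> bool:
      return a == b

  # A coarser, convention-driven relation: group tokens by kind alone.
  def same_coarse(a: FeelingToken, b: FeelingToken) -> bool:
      return a.kind == b.kind

  monday = FeelingToken("seeing_stop_sign", 0.7, "rainy commute")
  tuesday = FeelingToken("seeing_stop_sign", 0.6, "sunny commute")

  assert not same_fine(monday, tuesday)  # fine-graining always finds differences
  assert same_coarse(monday, tuesday)    # for everyday purposes, same feeling
  ```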

  4. I think the best solution to this mystery goes through the discussion of fine-graining; see the previous note.