The logical hallucination of information-as-data-processed-by-the-mind

Ordinarily, information is facts provided or learned about something or someone. But this definition doesn't quite work if we're thinking of the mind as an information processing system: how does the mind know what counts as a fact? How can a 'mechanical' system distinguish between what is a signal and what is noise? To solve this problem, we hallucinate a mysterious notion of information.

In this model of the human mind as an information processing system, what does information actually mean? We might think that we can define information here as data. But even if we now imagine a series of 1s and 0s or statistical data, that doesn't really clear anything up. Ordinarily, information is facts provided or learned about something or someone; so how can a deterministic 'mechanical' system distinguish between what is actual data or information, what is signal, and what is noise?

In his paper A Mathematical Theory of Communication, Claude Shannon laid the basis for the modern idea of information that we use in computing. What he came up with was an ingenious distinction between noise and information in terms of how likely a signal was to occur: the less likely a signal is to occur, the more information it contains. Mathematically, he expressed this as I(x) = -log2 p(x), where x is a discrete random variable, I(x) is the amount of information contained in x, and p(x) is the probability of x occurring. Shannon's sense of information became ubiquitous with the rise of computers.
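Shannon's measure is simple enough to compute directly. As a minimal sketch (the function name is mine, not Shannon's), self-information in bits falls to zero for a certain event and grows as an outcome becomes less probable:

```python
import math

def self_information(p):
    """Shannon self-information in bits: I(x) = -log2 p(x).

    The less probable an outcome, the more information it carries.
    """
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return -math.log2(p)

# A certain event carries no information; rarer events carry more.
print(self_information(1.0))   # 0.0 bits
print(self_information(0.5))   # 1.0 bit
print(self_information(0.25))  # 2.0 bits
```

Note that nothing here knows what the outcome is about: the measure depends only on probabilities, which is exactly the gap between Shannon's sense of information and the everyday one discussed below.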

It should be clear that this definition of information bears little relation to the everyday concept we invoke when we say, for example, 'all the information that you need is on the sheet'. Shannon set out to solve engineering problems involving communication over a noisy channel. He did not set out to deal with questions of meaning or interpretation, which, in his words, his theory 'can't and wasn't intended to address'.

People often compare information being input into a computer to information entering the brain via the senses: a teacher telling a student something is simply a much more complicated version of someone typing words into a computer. But a person does not communicate with a computer in the same sense that two people communicate with each other. To mean something with one's words is to intend for something to be understood by them. We might communicate via a computer, but we don't expect computers to understand us. In ordinary language, the ability to communicate, to use language (and to understand it), is what we would call a volitional power (see the previous section Freedom and Powers). A volitional power is a two-way power: something that someone can do but can also refrain from doing. My mug has the power to weigh down the papers on my desk on a windy day, but it can't refrain from doing so. If we don't have the ability not to communicate something or behave in a particular way (i.e. if we only have a one-way power), then we cannot mean anything with what we do.

Computers don't have any volitional powers: they can only act as they do. And so the knowledge we acquire cannot be reduced to information, at least not in the sense of data stored by a computer.

In fairness to many cognitive scientists, the mind is normally modelled not as a computer but as a natural information processing system, comparing human cognition with evolution. In biology, the concept of information is used with reference to the genome. Genomes are strings of symbols, and it is quite natural to see those symbols in terms of the standard information theory that originated with Shannon; for some, this was the assumed definition within biology. Others, however, have argued that this is not a rich enough conception of information for biological purposes.

The Shannon conception only defines the information encoded in a genome in terms of its deviation from random expectation, and this doesn't take into account any of the teleonomic aspects of biological information.

The biologist Colin Pittendrigh made a distinction between teleology and teleonomy: something is teleological when it has been created with a particular purpose in mind. Something is teleonomic when it appears to have purpose but cannot be said to have been created for it. Pittendrigh coined the term in the fifties to describe the apparent end-directedness of biological systems without the need to invoke a designer or foresight.

In this way, biological information appears to have a function or purpose derived from a history of natural selection. Koonin, for example, argues, roughly speaking, that the concept of information in biology should be seen in terms of the difference made by a variation between two homologous nucleotide or amino acid sequences. Koonin calls this the meaning contained in the genome as opposed to the mere information that Shannon was describing.
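The contrast can be made concrete with a toy sketch (the function name and sequences are mine, and this is an illustration of the idea, not Koonin's actual formalism): on this view, what matters is not the raw symbol statistics of a sequence but the positions at which two aligned homologous sequences differ, since each such variant can make a functional difference subject to selection.

```python
def variant_positions(seq_a, seq_b):
    """Return the positions where two aligned homologous sequences differ.

    On a Koonin-style view, it is these differences, rather than the
    Shannon statistics of the symbols, that carry biological 'meaning':
    each variant is a change that can have a selectable consequence.
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    return [i for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

# Two toy homologous nucleotide sequences differing at one position:
print(variant_positions("ATGCGT", "ATGAGT"))  # [3]
```

Even here, of course, 'meaning' names only a consequence of the variant, which is precisely the point pressed in the next paragraph.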

However, a sequence of a genome only has meaning in the sense that it has a consequence or result. Darwin explained definitively how, given that the process of natural selection favours advantageous adaptation, there can be purpose in nature without a designer. Thus, unless one wants to invoke a creator God, there is no decision-making entity involved, only the complex reactions and interactions of chemicals, and a reaction is not a choice. The chemicals cannot refrain from reacting, and thus, this ability to react is not a two-way power.

So however we bake this notion of information, we cannot give an account of the mind in terms which can account for intention. Whilst an algorithm, or any deterministic system, biological or otherwise, can produce a meaningful phrase in the sense that it can be a phrase with a recognisable use, such a system cannot mean anything, because it cannot intend that use. It cannot intend to convey anything. Thus, explanations in terms of such systems will always be of limited use. Evolution possesses only one-way powers. Evolution cannot intend to do anything, and as a result it makes no sense to blame, be angry with, or punish evolution (except perhaps poetically). Concepts like communication, language, information and knowledge thereby come to mean something quite different from what we refer to in ordinary language. All the meaning is removed from their account and replaced with mere consequences. (It is not difficult to see how this might have grave consequences for how we conceive of education.)

Of course we require brains and biology to do anything, but that doesn’t mean we should reject top-down explanations.
