Peter Achinstein and Explaining As An Activity

Filed under: — Bravus @ 5:58 pm

Achinstein notes that most accounts of scientific explanation have focused on the ‘product’ – the explanation itself, whether spoken or written – rather than on the act of explaining. He sets out to analyse explanation from the perspective of what human beings are doing when we explain.

An explanation is given by someone, with the purpose of helping someone else to understand. Achinstein explains it in slightly more technical language, but in brief he says that the purpose of an explanation is to have the audience know the correct answer to a question and know that it is a correct answer. We’ll leave aside the kinds of questions for which there is no correct answer, or many correct answers.

Achinstein describes explaining as an ‘illocutionary’ act. This is from a framework by Austin. Wikipedia sez: “In Austin’s framework, locution is what was said, illocution is what was meant, and perlocution is what happened as a result.”

Achinstein notes that the exact same sentence can be said with different intentions. An example he uses (I’ll paraphrase somewhat) is that when Dr Jones says “Bill ate spoiled meat”, he is giving an explanation of Bill’s stomach ache, and therefore the kind of illocutionary act he is undertaking is ‘explanation’. When Bill’s wife Jane says “Bill ate spoiled meat”, she is criticizing Bill’s dietary choices, so she is undertaking an illocutionary act of the kind ‘criticism’. This is true even though both people said the exact same words.

Achinstein suggests an ‘ordered pair’ approach, which can be written as (p, explaining q). ‘p’ is the explanation product itself – a sentence or proposition – and the second element of the pair indicates that someone said (or wrote) p in order to explain something, ‘q’. Dr Jones’ response might then be written as (“The reason that Bill has a stomach ache is that Bill ate spoiled meat”, explaining why Bill had a stomach ache).
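For the programmers among my readers, the ordered-pair idea can be modelled as a tiny data structure. This is just my own illustrative sketch, not anything Achinstein proposes; the class and field names are inventions of mine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExplanationAct:
    """Achinstein-style ordered pair (p, explaining q):
    a product p offered in the act of explaining q."""
    p: str  # the explanation product (a sentence or proposition)
    q: str  # what is being explained

# Dr Jones' explanation of Bill's stomach ache as an ordered pair:
dr_jones = ExplanationAct(
    p="The reason that Bill has a stomach ache is that Bill ate spoiled meat",
    q="why Bill had a stomach ache",
)
print((dr_jones.p, f"explaining {dr_jones.q}"))
```

The point of the pair is that the same ‘p’ could appear in a different act (criticism, say), so the product alone underdetermines what is going on.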

Identifying what is going on in the explaining process makes the explanation ‘product’ clearer.

He considers the issue of evaluating explanations: a correct explanation may not be a good explanation in general terms, or it may not be a good explanation for a particular audience or a particular purpose. Achinstein talks about ‘instructions’ for explaining in a particular context.

Achinstein proposes the following criteria for the goodness of an explanation:

  a. The audience does not already understand it
  b. There is a way to explain it that will allow the audience to know the correct answer and that it is a correct answer
  c. The audience is interested in the explanation
  d. It will be valuable for the audience to understand the explanation

There are a lot more details and issues, but the two key takeaways for me are (1) this approach is closer to my concerns with science teaching explanations than those of Hempel and Salmon, because it centrally includes the explainer and the audience, and (2) the challenges of teaching lie in ensuring conditions (c) and (d) above – that our students are interested in the explanations we offer, and that those explanations will be valuable for our students.

Note that (d) is not ‘the audience knows that it will be valuable to understand’. While that’s desirable, it is not essential, as long as the explainer knows it. But I would argue that it must be authentically in the interests of the audience (students, learners) to understand the explanation if we are to justify teaching it, and ‘valuable’ needs to mean something much more than passing an exam. The explanations we give in science teaching should transform worldviews and offer tangible benefits.


Eine Kleine Achinstein

Filed under: — Bravus @ 5:54 pm

Just a little taste for you of the kind of stuff I’m reading at the moment. The sauv blanc helps, at least in moderation. 😉

If Q is an explanation-seeking question (e.g. ‘Why did Nero fiddle?’), and q is the indirect form of the question (e.g. ‘The reason that Nero fiddled is that______’), and if a person A is seeking to understand q, and if qI is the answer to q under a specific set of instructions, I (so, for example, it might be ‘Explain why Nero fiddled in terms of his mental state’ or ‘Explain why Nero fiddled in terms of historical factors obtaining in Rome at the time…’ and so on), then:

A understands qI only if (∃p)(p is an answer to Q that satisfies I, and A knows of p that it is a correct answer to Q, and p is a complete content-giving proposition with respect to Q). (Achinstein, 1983, p. 57)

∃ is the ‘existential quantifier’, which means ‘there exists’; so ∃p means ‘there exists a proposition p such that…’

A ‘complete content-giving proposition’ is complex, but basically it means it contains everything relevant and nothing irrelevant to explaining Q.
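The existential condition can be paraphrased in code: A understands q under instructions I only if at least one candidate answer passes all three tests. This is my own toy rendering, not Achinstein's formalism; the predicates (and the stand-in lambdas) are hypothetical placeholders for much richer notions.

```python
def understands(candidates, satisfies_instructions, knows_correct, complete_content_giving):
    """True iff there exists a p meeting all three of Achinstein's conditions."""
    return any(
        satisfies_instructions(p) and knows_correct(p) and complete_content_giving(p)
        for p in candidates
    )

answers = [
    "Nero fiddled because he was mad",
    "Nero fiddled because fiddling was fashionable in Rome",
]
# Hypothetical instructions I: explain in terms of Nero's mental state.
result = understands(
    answers,
    satisfies_instructions=lambda p: "mad" in p,
    knows_correct=lambda p: True,            # stand-in for A's knowledge of correctness
    complete_content_giving=lambda p: True,  # stand-in for the completeness test
)
print(result)
```

The `any(...)` is the ∃p doing its work: one suitable proposition suffices.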

Wesley Salmon, Statistical Relevance and Causal/Mechanical Explanation

Filed under: — Bravus @ 2:49 pm

As you’ll know if you’ve been following along in this series of posts1 on the philosophy of explanation, or if you decide to go back and read them in chronological order before continuing to read this one, Wesley Salmon is a realist who has been working on the problems of explanation for some considerable time. He first advanced and then withdrew a ‘statistical-relevance (SR)’ approach to explanation, and later adopted what he called a ‘causal/mechanical’ approach. My aim here is to briefly explore both of these approaches and what they offer.

You’ll remember that Hempel advanced the ‘deductive-nomological (D-N)’ model for explanations when the causal laws that govern the scientific phenomena are deterministic: ‘if X happens then Y will definitely happen’. He also introduced the ‘inductive-statistical (I-S)’ model for when the laws are probabilistic (e.g. in quantum mechanics): ‘if A happens there is a 78% chance that B will happen’. Hempel insisted on a high probability (close to 100%, or 1.0) for explanations under the I-S approach. The main reason is that, if the probability is lower, A could presumably explain both the occurrence and the non-occurrence of B. Say the probability of B given A is .5, and A occurs: if B occurs we say ‘B happened because A’, but if B does not occur it also makes some sense to explain this in terms of A, since there is a 50% chance that A will not lead to B.
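The arithmetic behind Hempel's worry is trivial but worth making explicit; here's a minimal sketch of it (the numbers are just the illustrative ones from the paragraph above).

```python
# Why Hempel wanted high probability in the I-S model: if P(B|A) = 0.5,
# then A confers exactly the same probability on B occurring as on B
# failing to occur, so it "explains" either outcome equally well.

p_b_given_a = 0.5
p_not_b_given_a = 1 - p_b_given_a

print(p_b_given_a, p_not_b_given_a)    # 0.5 0.5
print(p_b_given_a == p_not_b_given_a)  # True: A favours neither outcome

# With Hempel's requirement (probability close to 1) the asymmetry returns:
p_high = 0.98
print(p_high > 1 - p_high)             # True: B is now strongly favoured
```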

There are also other helpful counter-examples. Jim (who is biologically male) did not become pregnant last year. Jim faithfully took birth control pills all year. Logically, we could say that Jim did not become pregnant because he took birth control pills, but our intuition tells us this is not a valid explanation. The birth control pills are not relevant to explaining the phenomenon. Similarly, being a lifelong smoker only yields about a 20% chance (probability of .2) of getting lung cancer, yet we consider that the smoking explains the cancer.

Similarly, the probability arguments can be complex. Someone who has pneumonia and is treated with penicillin has a higher probability of recovering than someone who has pneumonia but is not treated. We would argue that the penicillin caused the recovery, or at least that it did so in conjunction with the patient’s immune system. (On the other hand, if we observe that taking Vitamin C correlates with recovering from the common cold after about a week we might consider that it is causal… until we realise that most people, Vitamin C or not, recover from the common cold in about a week.)

Salmon suggested, therefore, that relevance is important in statistical cases. He also noted, as in the smoking example, that events with low probabilities can be explained, whereas Hempel’s approach insists on high probabilities.

Let’s go back to the pneumonia patient, but add the information that there are penicillin-resistant strains of pneumonia. The simple argument that penicillin improves the odds of recovery is complicated by this new information, and the two initial classes of pneumonia patients – those treated with penicillin and those not – become four classes: those untreated who have the non-resistant strain, those treated who have the non-resistant strain, those untreated who have the resistant strain, and those treated who have the resistant strain. In considering an individual patient’s likelihood of recovery, which of these quadrants s/he falls in is statistically relevant.

Salmon adds two further criteria: (a) all relevant factors must be included and no irrelevant ones, and (b) we must divide up our whole population of cases so that we look at an ‘objectively homogeneous’ class in trying to explain something. For example, in the case of our pneumonia patient, we can divide the population into four groups using two factors, and each of those groups will be somewhat homogeneous (all members having the same characteristics). But there are potentially other relevant factors, like age, sex, obesity… the list is almost endless. In the end, while Salmon described objective homogeneity as an ideal, he conceded that practical problems mean it is unlikely to be actually useful in constructing and evaluating real explanations. He moved on to consider the important role of causality:
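To see statistical relevance at work in the four-way partition, here's a small sketch with invented counts (the numbers are purely illustrative, not real clinical data): penicillin shifts the recovery probability in the non-resistant cells but not in the resistant ones.

```python
# Hypothetical counts for Salmon's partition of pneumonia patients by
# treatment and by strain. Statistical relevance shows up as a difference
# in recovery probability across cells of the partition.

cells = {
    # (treated, resistant_strain): (recovered, total)
    (True,  False): (90, 100),
    (False, False): (40, 100),
    (True,  True):  (42, 100),
    (False, True):  (40, 100),
}

for (treated, resistant), (recovered, total) in cells.items():
    p = recovered / total
    print(f"treated={treated!s:5} resistant={resistant!s:5} P(recovery)={p:.2f}")

# Penicillin is statistically relevant only for the non-resistant strain:
relevant = cells[(True, False)][0] / 100 != cells[(False, False)][0] / 100
irrelevant = abs(cells[(True, True)][0] / 100 - cells[(False, True)][0] / 100) < 0.05
print(relevant, irrelevant)  # True True
```

An ‘objectively homogeneous’ class would be a cell that no further relevant factor could subdivide – which is exactly what the endless list of age, sex, obesity and so on makes practically unattainable.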

I no longer believe that the assemblage of relevant factors provides a complete explanation—or much of anything in the way of an explanation. We do, I believe, have a bona fide explanation of an event if we have a complete set of statistically relevant factors, the pertinent probability values, and causal explanations of the relevance relations. (Salmon, 1978)

His discussion of causation and explanation gets into Reichenbach’s ‘screening off principle’, conjunctive forks, interactive forks and other complexities that don’t really concern me for the moment.

The big contribution from Salmon to my project is (a) the very thorough overview his book ‘Four Decades of Scientific Explanation’ offers of Hempel’s work and the responses to it up until the late 1980s, (b) his realist approach in contrast to Hempel’s anti-realist approach and (c) the ways in which the statistical-relevance approach, despite shortcomings of its own, fixed some of the shortcomings of Hempel’s approach and led to other interesting work. He also enabled me to think carefully about which philosophers working in this field will need to be considered in depth in my book, for my purposes, and which can be mentioned in brief but not analysed in depth.

Next cab off the rank is Peter Achinstein, whose approach is less rigidly logical-philosophical and more directly focused on what human beings do when we explain. He calls it an ‘illocutionary’ approach, which is just a longer word for the process of giving an explanation and the ‘product’ of that explanation, whether it be written, spoken, animated etc. I’ll be reading Achinstein’s book over the next few days and will report in when I’ve done that.

  1. As you may have guessed, this series is in part a way of sharing the stuff I’m interested in and excited about with others, partly a way of taking notes for myself to remind me of some of the broader themes of what I’m reading… and partly just procrastination from writing the book I’m supposed to be writing about this stuff! I feel as though it’s worthwhile procrastination, though, because if I can explain it for a smart lay audience of my friends it will help me to better understand it for when I write about it more formally.


Salmon, W. (1978). “Why Ask ‘Why?’? An Inquiry Concerning Scientific Explanation”, Proceedings and Addresses of the American Philosophical Association, 51(6): 683–705. Reprinted in Salmon 1998: 125–141. doi:10.2307/3129654

Salmon, W. (1998). Causality and Explanation, New York: Oxford University Press. doi:10.1093/0195108647.001.0001

Realism and Anti-Realism in Philosophy of Science

Filed under: — Bravus @ 9:40 am

There’ll be a much more detailed post shortly about Wesley Salmon’s ‘statistical-relevance’ theory of scientific explanation (a response and extension from Hempel’s ‘deductive-nomological’ and ‘inductive-statistical’ approaches, discussed in an earlier post). In the meantime, though, a quick discussion on realism and anti-realism.

The distinction is that realists accept that the unobservable entities that we use in our scientific explanations such as fields, atoms, electrons, photons and so on are real features of the universe. Anti-realists – and one prominent school within this camp is the instrumentalists – claim that these entities are useful rather than true. They serve their purpose in that they help us to provide explanations that work and theories that allow us to describe and predict observable phenomena, but they are not considered to be in any sense ‘real’. Hempel is an anti-realist, and constructs scientific explanations in terms of logical relations and laws. Salmon, on the other hand, is a realist1.

As a side note, Bas van Fraassen, another important figure in the philosophy of explanation (who, for various reasons, I will mention only in passing in my book) describes his position as ‘constructive empiricism’. While the anti-realist is an ‘atheist’ in terms of unobservable entities and makes the strong claim that they are not real, a constructive empiricist is ‘agnostic’: s/he neither knows nor cares whether they exist, and their reality is not a required feature of the approaches to explanation proposed by van Fraassen and those who follow him.

Salmon essentially uses two arguments in support of the reality of the unobservable. The first relates to extending the range of our senses. He talks about what he can see in a book with tiny print with and without his glasses, and notes that it would seem very odd to claim that the full stops on the page are not real when he has his glasses off but are real when he has his glasses on and can observe them. He then extends this, noting that the optics of a microscope are based on the exact same principles as the optics used in making his glasses, so it makes sense to consider the things that can be observed through a microscope to be real.

The argument then extends to telescopes and things like the moons of the planets in our solar system, which are not visible to the naked eye. The objection has been made by others that we could, in principle, travel to the moons of the planets and verify their existence with our senses but that we can’t (‘Fantastic Voyage’ aside) travel to the microscopic realm to check our observations in the same way.

In response to this, Salmon talks about a process by which a grid is designed at macroscale then shrunk and manufactured at microscopic scale and used for things like counting bacteria in a sample under a microscope. It seems quite silly to claim that, at the scale when we can no longer observe it directly with our unaided senses, such a grid loses its reality.

The final argument is based on the work of Jean Perrin, who started out observing Brownian motion (the way in which very small particles suspended in a fluid (gas or liquid) exhibit random movement, which is explained as being caused by collisions with the particles in the fluid, e.g. water molecules or nitrogen molecules in air). Brownian motion allows the direct observation (though usually aided by a microscope, because particles small enough to be bumped off course by a single molecule are pretty small) of the effects of molecules, although the molecules themselves cannot be seen. Perrin used Brownian motion to find the value of Avogadro’s Number, 6.02 × 10²³, a very important number in chemistry that relates the molecular and macro scales.
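Brownian motion is one of those things that is easy to play with computationally. Here's a toy random-walk model (my own illustration, far simpler than Perrin's actual analysis): a suspended particle gets jolted by many independent molecular collisions, and its mean squared displacement grows linearly with the number of steps – the quantitative signature Perrin exploited.

```python
import random

def mean_squared_displacement(steps, trials=2000, rng=random.Random(42)):
    """Average squared distance from the origin after a 2-D random walk,
    a crude stand-in for a Brownian particle buffeted by collisions."""
    total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(steps):
            x += rng.choice((-1, 1))  # net kick from collisions, x-direction
            y += rng.choice((-1, 1))  # net kick, y-direction
        total += x * x + y * y
    return total / trials

# Mean squared displacement grows roughly linearly (about 2n for n steps):
for n in (100, 200, 400):
    print(n, round(mean_squared_displacement(n)))
```

Doubling the number of steps roughly doubles the mean squared displacement, which is why the spreading of visible particles can be tied quantitatively to the invisible molecules doing the kicking.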

The really interesting thing, though, is that Perrin then went on to find 13 different and independent ways to determine the value of Avogadro’s number, such as electroplating silver out of a solution and measuring the current used for a given mass of silver, radioactive decays and so on. The fact that a range of independent experiments, across a range of different branches of chemistry and physics, all yielded the same number (within experimental error) is at least pretty strong inferential empirical evidence for the reality of atoms, molecules and electrons.

When we get to photons and other entities at the level where quantum phenomena are dominant, it gets more complex still… things sometimes behave like particles and sometimes like waves. Are they ‘real’? They help us to create good – if complex (literally) – explanations.

I have to admit that, while in general I’m probably inclined toward realism, if I had to swear to it, hand on heart, constructive empiricism would be an attractive approach for me. Or is that just a copout?

There’s a good, if somewhat technical, introduction to some of the issues in explanation here: https://www.iep.utm.edu/explanat/ For me, it makes too much of the implications of this realist/anti-realist distinction, when I find other aspects of explanation more interesting and important, but nonetheless it does a nice job of sketching the last 70 years in the philosophy of this issue, since Hempel and Oppenheim’s seminal paper in 1948.

More Salmon shortly.

  1. Like many other terms in science and philosophy, ‘realist’ has a technical and an everyday meaning. In everyday parlance, a ‘realist’ is someone who takes the world as it is, as opposed to an ‘idealist’ who seeks to work as though the world follows – or ought to follow – some ideal order. It’s important to distinguish that sense of the term ‘realist’ from the technical meaning discussed in this post.