Is intelligence a quantity?

This post is inspired by a book that I’m currently in the process of reading.

What is a quantity?

Aristotle, in his Metaphysics, defines two kinds of quantity: multitude and magnitude. A multitude is a collection of discrete objects that can be counted. A magnitude is more interesting. Aristotle’s own gloss, “A quantity is a multitude if it is numerable, a magnitude if it is measurable,” is not very helpful. In the Elements, Euclid defines magnitude roughly as the thing that is comparable to other magnitudes of the same type, where “comparable” means wholly divisible by some common measure. (Wholly, because Euclid only accepted whole numbers.) Euclid then develops a theory of ratios based on this comparison of magnitudes.
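Euclid’s comparison procedure (anthyphairesis, essentially the Euclidean algorithm) can be sketched in a few lines of Python; the magnitudes here are toy numbers of my own, not anything from Euclid:

```python
from fractions import Fraction

def common_measure(a: Fraction, b: Fraction) -> Fraction:
    """Euclid-style anthyphairesis: alternately take away the smaller
    magnitude from the larger until one exactly measures the other."""
    while b:
        a, b = b, a % b
    return a

# Two commensurable lengths of 12 and 8 units share a common measure
# of 4 units, so they stand in the whole-number ratio 3 : 2.
unit = common_measure(Fraction(12), Fraction(8))
ratio = (Fraction(12) / unit, Fraction(8) / unit)
```

For incommensurable magnitudes (say, the side and diagonal of a square) this procedure never terminates, which is exactly the discovery that eventually forced the whole-number requirement to be relaxed.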

As time goes on, the requirement that two magnitudes are only comparable if they stand in a ratio of two whole numbers is relaxed. Newton defines number as the abstracted ratio of a magnitude to a unit of the same kind. Quantity, measurement, and number are therefore tightly knitted together: mathematics is the science of quantity; all measurements are of quantities; quantities are the only things in nature that can be measured and, consequently, the only things that can be studied by empirical science. This sentiment that empirical (meaning measurable) science must be quantitative (therefore mathematical), in one form or another, can be found in Descartes, Kant, and many others.

The definition that measurement is the estimation of a ratio between a quantity (magnitude) and a unit, which I’ll follow Michell in calling the “classical” definition of measurement, is no longer the dominant one in psychology. Instead, many psychologists subscribe to the account of S. S. Stevens, which defines measurement as “assignment of numerals to objects or events according to rules” (1951). Other than that the assignment can’t be random (Stevens, 1975), there’s little formal constraint on how such rules are to be formulated and justified. Two of the books I’m currently reading on psychological measurement, Michell’s book and Denny Borsboom’s Measuring the Mind, both hate Stevens’ definition.

Modern measurement theory is, by and large, based on understandings similar to Stevens’, though not necessarily directly motivated by his own writings. A cluster of ideas (notably operationism, conventionalism, verificationism, positivism, pragmatism), with different motivations and different levels of independence from one another, all converged on the view that a consistent description of observations is just as good as anything nature can give, presumably because 1) consistency is hard enough, and 2) any ground we might use to differentiate two competing theories that are both consistent with the observations is, by definition, not empirical. 20th century measurement theory then became an activity of axiomatization — what observations are consistent with what structures? Axiomatize the observations and find out! This understanding of measurement — which we can call measurement-by-homomorphism — is clearly very different from our colloquial understanding of measurement, which is more like noting down observed properties in the world (what Michell prefers) or having the world causally produce results in a measuring device (what Borsboom prefers).

Measurement theorists before Stevens tended to build homomorphisms “from the ground up” by identifying important features of quantities that can serve as natural interpretations for numerical operators. For example, if there is a natural concatenation procedure for a quantity (e.g., end-to-end attachment for length, putting on top of each other for weight) and it functions like addition through a homomorphism (Helmholtz) or satisfies certain more precise axioms (Hölder), then it can serve as the interpretation of addition upon which a homomorphism between this quantity and the real numbers can be built. Even though this way of measuring things is not the same as comparing the ratio of the lengths of two rocks in the way Euclid had in mind, there is still a sense in which we need to start by noticing something concrete about the world — about the quantity under measurement, if you will.
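A minimal sketch of this ground-up construction, with rods as toy magnitudes. The representation is deliberately idealized (each rod is identified by how many copies of a standard rod match it end to end), and all the names here are my own:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rod:
    """A toy magnitude: identified by how many copies of a
    standard rod, laid end to end, match it."""
    units: int

    def concat(self, other: "Rod") -> "Rod":
        # the empirical operation: end-to-end attachment
        return Rod(self.units + other.units)

def phi(rod: Rod) -> float:
    """A Helmholtz-style homomorphism into the reals:
    concatenation of rods maps to addition of numbers."""
    return float(rod.units)

a, b = Rod(3), Rod(5)
additive = phi(a.concat(b)) == phi(a) + phi(b)  # True: phi respects concat
```

The sketch is admittedly circular (the rod already carries a number), but it shows the shape of the idea: an empirical operation, concat, that the numerical assignment phi must respect.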

Once we go past Stevens, however, we’re truly in float-land. Stevens is interested in the invariance properties of homomorphisms or “scales”. Basically, the invariance properties pick out how much structure a quantity has, whether that structure can be mapped onto the real numbers, and which transformations carry one admissible mapping into another. The more structure a quantity has, the fewer mappings it can admit.

What does “a quantity” mean in this sense? Recall that it used to be that a quantity (magnitude) is what we can compare with another quantity (magnitude) of the same type using rational numbers. Some of that spirit is still present in Stevens’ scales: the interval scale is often explained as one where the distance between numbers “has meaning”. There is a sense in which the distance between numbers is occupied by some stuff, and there is an objective fact of the matter whether this stuff is more or less than the stuff that occupies another distance. Consistent with this thematic continuity, it is common now to say that things measurable using interval and ratio scales are quantitative, whereas ordinal and nominal scales are qualitative.
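This contrast can be made concrete. In Stevens’ terms, an ordinal scale survives any monotone transformation, while an interval scale survives only affine ones; the numbers below are invented for illustration:

```python
def preserves_order(xs, f):
    """Ordinal structure: every pairwise comparison survives f."""
    return all((f(a) < f(b)) == (a < b) for a in xs for b in xs)

def preserves_difference_ratios(xs, f, tol=1e-9):
    """Interval structure: ratios of distances between points survive f."""
    pairs = [(a, b) for a in xs for b in xs if a != b]
    for a, b in pairs:
        for c, d in pairs:
            before = (a - b) / (c - d)
            after = (f(a) - f(b)) / (f(c) - f(d))
            if abs(before - after) > tol:
                return False
    return True

xs = [1.0, 2.0, 5.0, 11.0]
affine = lambda x: 2 * x + 3    # admissible for an interval scale
monotone = lambda x: x ** 3     # admissible only for an ordinal scale
```

The cube preserves every ordering but scrambles the ratios of distances, which is precisely the structure an interval scale is supposed to carry.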

How might intelligence be not measurable on a quantitative scale?

How might anything be not measurable on a quantitative scale?

Quantification is so common these days it’s hard to think about what it means for something to be a thing without being a quantity. Here’s an example I came up with:

Consider “depth of conversation”. Some conversations are deep; some are shallow. Given two conversations, people can probably assess which one is deeper, and they probably agree a lot of the time. The fact (or, my expectation) that people agree means that there is something we are measuring; it’s not just random guessing.

Can we make this measurement more precise? One thing we can do is note down the average length of sentences spoken, with the expectation that deep conversations consist of longer sentences. Sentence length is a quantity measurable on a ratio scale. Suppose average sentence length correlates very highly with subjective assessments of depth of conversation – does that mean depth of conversation magically becomes a ratio quantity, then?
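One way to see why not, again with invented numbers: two “instruments” can agree on every ordinal comparison of depth (and so correlate strongly) while disagreeing wildly about ratios:

```python
# Five conversations, ranked by depth (ordinal information only).
depth_order = [1, 2, 3, 4, 5]

# Two hypothetical instruments that both track the ordering,
# but through different monotone read-outs.
instrument_a = [r for r in depth_order]       # linear read-out
instrument_b = [r ** 3 for r in depth_order]  # nonlinear read-out

# They agree on every ordinal comparison...
same_order = all(
    (instrument_a[i] < instrument_a[j]) == (instrument_b[i] < instrument_b[j])
    for i in range(5) for j in range(5)
)

# ...but disagree about ratios: is conversation 4 “four times as deep”
# as conversation 1, or sixty-four times as deep?
ratio_a = instrument_a[3] / instrument_a[0]  # 4.0
ratio_b = instrument_b[3] / instrument_b[0]  # 64.0
```

Nothing in the correlational evidence decides between the two instruments, so nothing in it certifies the ratio (or even interval) structure of depth itself.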

That doesn’t sound right, but that’s pretty much how intelligence works. The instruments we use to measure intelligence are often quantitative in a somewhat-reasonable way. We judge them to have good validity because they predict things we think intelligence predicts. Consequently, intelligence must be a quantity, right?

To push this analogy even further: suppose we notice that some people have more deep conversations than other people. So… these people clearly have a higher “deep conversation capacity”, right?

But wait, there are many other reasons why someone might have more deep conversations than others. Maybe they are therapists and their job is to have deep conversations with people. Maybe they are slightly socially anxious and so avoid small talk as much as possible. In any case, why is “deep conversation” suddenly a personal aptitude thing?

In case you’ve missed it, this is also how intelligence works. Some people have greater academic/intellectual/creative success in life than others, so it must be something internal about them that makes them successful. What else could it be?

Does it matter?

This is the real question. In one sense, if intelligence isn’t a quantity and we thought it was, we are getting nature wrong, so of course it matters. In another sense, however, if measurement is to be understood as simple comparison of structures (which I, along with the authors I’ve been reading, don’t especially like), then thinking there to be more structure than there is doesn’t seem too harmful.

(In any case, I would really appreciate any arguments for why it’s harmful, since I’m writing a paper on this.)

Kino