Wow! It’s 10pm on a cool spring night in March 2021, and what do I discover lurking in my drafts? A review of The Structure of Scientific Revolutions from 2018 that I never got around to finishing! (Maybe because Slate Star Codex published a review around the same time and I got dispirited?)

But hey, the draft actually seems pretty good! So in the spirit of anti-perfectionism, here it is, in all its wonderful unfinished glory.

Introduction

Thomas Kuhn’s The Structure of Scientific Revolutions has been one of those things you stumble across accidentally and then suddenly start noticing references to everywhere.

What brought it to my attention was a discussion with my supervisor about what kind of impact one should expect to have with one’s research. One of his key takeaways from reading Structure was how hard it is to come up with earth-shattering ideas once you’ve been immersed in a field for a long time; that the really new ideas often have to come from someone on the outside.

While digesting this, I realised I’d heard about Structure before - in a comment replying to a post discussing whether MIRI’s difficulty in explaining their research to others is evidence that the research isn’t ‘correct’:

It seems to me that research moves down a pathway from (1) “totally inarticulate glimmer in the mind of a single researcher” to (2) “half-verbal intuition one can share with a few officemates, or others with very similar prejudices” to (3) “thingy that many in a field bother to read, and most find somewhat interesting, but that there’s still no agreement about the value of” to (4) “clear, explicitly statable work whose value is universally recognized within its field”.

In “The Structure of Scientific Revolutions”, Thomas Kuhn argues that fields begin in a “preparadigm” state in which nobody’s work gets past (3).

Kuhn’s claim seems right to me, and AI Safety work seems to me to be in a “preparadigm” state in that there is no work past stage (3) now.

So I decided to give Structure a read and see what it actually has to say. Here are some notes on what I took away.

Kuhn’s model of scientific progress

At the time Structure was written, science was seen as an orderly, linear process: facts were accumulated on top of previous facts, and that gradual accumulation led to continuous progress forward.

But progress in any field, Kuhn argued, tends to require some shared set of assumptions, some commonality in how people view the problem; a shared paradigm. Every so often, it turns out that the current paradigm isn’t enough, and problems start to pile up - for example, observations that the paradigm can’t explain. Eventually, someone proposes an alternative perspective that solves the problems, and a shift occurs - a paradigm change - that resembles not so much an improvement as a complete replacement of the previous way of looking at things.

Scientific progress, then, ends up looking less like a steady climb and more like a series of spurts: long stretches of incremental progress under one particular way of viewing the problem, interspersed with revolutions which completely change the definition of what the problem is.

Kuhn breaks this trajectory down into a number of stages.

  • Pre-paradigm. No one agrees about what the nature of the problem is. There are lots of competing frameworks, and not much gets done.
  • Paradigm formation. Eventually, one framework edges far enough ahead to win out. The framework doesn’t solve everything, but it’s good enough to convince people to focus work in that direction.
  • Normal science. Once a paradigm has been established, the bulk of the work that occupies the scientific community is in fleshing it out: resolving open questions, simplifying the formulation, collecting facts.
  • Crisis. At some point, problems with the paradigm start to emerge.
  • Revolution. An alternative paradigm is proposed.
  • Acceptance. Sometimes the new paradigm is accepted straight away; sometimes it takes longer. Eventually, however, the field switches to the new paradigm and returns to normal science.

Kuhn’s model has satisfying explanatory power. But the most interesting part of the book was not the model itself - it was the details of what a paradigm is and how a paradigm influences a field.

Key takeaways

Structure is packed with all sorts of interesting nuggets, but there are two points I’ll remember as particularly important.

1. The paradigm restricts your thinking more than you realise.

When you’re used to looking at a problem in one way, it’s hard to then look at it a different way - that much is common sense. But Structure’s many stories of people repeatedly failing to switch viewpoints gave me a much more tangible sense of how much of a problem this can be in science.

For example, strong belief in the paradigm can make you gloss over anomalies. One case appears in the phlogiston theory of burning. This historical theory held that things burn because they release a fire-like substance called phlogiston - in contrast to our modern understanding based on the uptake of oxygen. One might expect a key piece of evidence to be the change in mass on burning - but this was largely ignored until the investigations of Antoine Lavoisier:

[Lavoisier] was also much concerned to explain the gain in weight that most bodies experience when burned or roasted, and that again is a problem with a long prehistory. At least a few Islamic chemists had known that some metals gain weight when roasted. In the seventeenth century several investigators had concluded from this same fact that a roasted metal takes up some ingredient from the atmosphere. But in the seventeenth century that conclusion seemed unnecessary to most chemists. If chemical reactions could alter the volume, color, and texture of the ingredients, why should they not alter weight as well? Weight was not always taken to be the measure of quantity of matter. Besides, weight-gain on roasting remained an isolated phenomenon. Most natural bodies (e.g., wood) lose weight on roasting as the phlogiston theory was later to say they should.

Under Kuhn’s model, one might assume paradigm change is a relatively smooth process in which anomalies occur and a new paradigm is naturally proposed in response. But this is not necessarily the case: if you’re too deep in the paradigm, you may not even see the anomalies for what they are.

So the paradigm influences what you see. But it also influences what you do. In particular, the paradigm can blind you to the important experiments.

Here a good case is the early research on oxygen. One of the first to generate samples of the gas was Joseph Priestley. However, Priestley initially interpreted oxygen as simply a particularly pure form of air. This was partly because of its response to a standard ‘goodness of air’ test. The test involves mixing the sample of gas with another gas, nitric oxide, over water. When ‘pure’ air is subjected to this test, the volume of gas decreases significantly, whereas with ‘impure’ air (e.g. air in which a flame had burned), the volume of gas decreases less. We now know this is because the nitric oxide reacts with the oxygen to form nitrogen dioxide, which dissolves in the water. By coincidence, with the standard ratio of nitric oxide to test gas, both pure oxygen and normal air leave about the same volume of gas, but for different reasons: in the former case, all the nitric oxide is used up, leaving only oxygen; in the latter, the oxygen in the air is used up, leaving mainly nitric oxide and nitrogen. This led Priestley to categorise oxygen as just air - even though trivial variants of the experiment, such as examining the properties of the gas that remains, or trying different proportions, would have revealed the starkly different behaviour of oxygen:

Only much later and in part through an accident did Priestley renounce the standard procedure and try mixing nitric oxide with his gas in other proportions. He then found that with quadruple the volume of nitric oxide there was almost no residue at all. His commitment to the original test procedure - a procedure sanctioned by much previous experience - had been simultaneously a commitment to the non-existence of gases that could behave as oxygen did.

The last sentence seems particularly significant: your assumptions shape the design of your experiments in ways you don’t always realise.
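To make the coincidence concrete, here’s a quick back-of-the-envelope version of the test in Python. Two assumptions on my part that aren’t in the quote: that the standard procedure mixed two measures of test gas with one measure of nitric oxide, and that the gas absorbed is nitrogen dioxide, from the reaction 2 NO + O2 → 2 NO2:

```python
# Back-of-the-envelope model of the 'goodness of air' test.
# Assumed reaction: 2 NO + O2 -> 2 NO2, with the nitrogen dioxide
# dissolving into the water and so leaving the gas mixture.
# The default 2:1 ratio of test gas to nitric oxide is also an assumption.

def residue(o2_fraction, test_gas=2.0, nitric_oxide=1.0):
    """Volume of gas left after the test, in arbitrary 'measures'."""
    o2 = o2_fraction * test_gas
    inert = test_gas - o2                    # e.g. the nitrogen in common air
    o2_reacted = min(o2, nitric_oxide / 2)   # 2 volumes of NO per volume of O2
    no_reacted = 2 * o2_reacted
    return inert + (o2 - o2_reacted) + (nitric_oxide - no_reacted)

print(residue(0.21))  # common air: ~1.74 measures (nitrogen plus leftover NO)
print(residue(1.00))  # pure oxygen: ~1.50 measures (all NO gone, O2 left over)
print(residue(1.00, nitric_oxide=4.0))  # quadruple the NO: ~0, as Priestley found
```

Under these (hypothetical) proportions the two residues come out nearly equal, which is exactly why the standard test couldn’t tell oxygen apart from common air - and why only a change in proportions could.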

(An example of this in machine learning concerns why pretrained convolutional networks work so well. The straightforward explanation is that the network picks up good priors from the data it was trained on. Operating under this view, it might never occur to you to try using an untrained network. But that’s exactly what Deep Image Prior and Sanity Checks for Saliency Maps do, showing a) that image restoration works great even with untrained networks, and b) that several of the methods for generating saliency maps actually produce the same outputs for untrained networks as for trained networks. This points to a very different hypothesis: that pretrained networks work well partly because of the prior induced by the convolutional structure itself - a conclusion you would never reach if you let only your initial intuition guide your experiments.)
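To see just how simple the untrained-network experiment is, here’s a minimal sketch of the Deep Image Prior setup in PyTorch. The architecture and hyperparameters are illustrative guesses on my part, not the paper’s actual settings:

```python
# A minimal sketch of the Deep Image Prior idea: fit an *untrained* CNN,
# starting from random weights, to a single noisy image. No training data
# is involved anywhere.
import torch
import torch.nn as nn

def small_conv_net(width=32):
    # Randomly initialised - no pretraining of any kind.
    return nn.Sequential(
        nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, 3, 3, padding=1), nn.Sigmoid(),
    )

def denoise(noisy, steps=2000, lr=1e-3):
    """noisy: float tensor of shape (1, 3, H, W) with values in [0, 1]."""
    net = small_conv_net()
    z = torch.rand_like(noisy)  # fixed random input; only the weights are fitted
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(z) - noisy) ** 2).mean()
        loss.backward()
        opt.step()
    return net(z).detach()
```

Any denoising that comes out of this is down to the convolutional structure alone - though early stopping matters, since given enough steps the network will eventually fit the noise too.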

Finally, paradigms can influence what kinds of ideas you consider scientific - even in neighbouring fields. I’ll let the text speak for itself:

Before Newton was born the “new science” of the century had at least succeeded in rejecting Aristotelian and scholastic explanations expressed in terms of the essences of material bodies. To say that a stone fell because its “nature” drove it toward the center of the universe had been made to look a mere tautological word-play, something it had not previously been. Henceforth the entire flux of sensory appearances, including color, taste, and even weight, was to be explained in terms of the size, shape, position and motion of the elementary corpuscles of base matter.

Gravity, interpreted as an innate attraction between every pair of particles of matter, was an occult quality in the same sense as the scholastics’ “tendency to fall” had been. Therefore, while the standards of corpuscularism remained in effect, the search for a mechanical explanation of gravity was one of the most challenging problems for those who accepted the Principia as paradigm.

Unable either to practice science without the Principia or to make that work conform to the corpuscular standards of the seventeenth century, scientists gradually accepted the view that gravity was indeed innate.

The resulting change in the standards and problem-field of physical science was once again consequential. By the 1740’s, for example, electricians could speak of the attractive “virtue” of the electric field without thereby inviting the ridicule that had greeted [such ideas] a century before. As they did so, electrical phenomena increasingly displayed an order different from the one they had shown when viewed as the effects of a mechanical effluvium that could act only by contact. In particular, when electrical action-at-a-distance became a subject for study in its own right, the phenomenon we now call charging by induction could be recognized as one of its effects. Previously, when seen at all, it had been attributed to the direct action of electrical “atmospheres” or to the leakages inevitable in any electrical laboratory. The new view of inductive effects was, in turn, the key to Franklin’s analysis of the Leyden jar and thus to the emergence of a new and Newtonian paradigm for electricity.

To the extent that AI safety is currently pre-paradigm, this makes me feel wary about accepting a paradigm too soon and then getting locked into it. (Edit: at least, this is the way I felt in 2018. I’m not sure I still feel that way now, in 2021.) However, at the same time:

2. Sometimes a flawed paradigm is better than no paradigm at all.

Chapter two, The Route to Normal Science, goes into some detail about what a field looks like immediately before and after a paradigm is established. What surprised me here was the rallying effect a paradigm can have: how even a flawed paradigm can enable progress that wouldn’t have been made otherwise.

Two passages in particular provide a clear illustration of this idea.

Those electricians who thought electricity a fluid and therefore gave particular emphasis to conduction provide an excellent case in point. Led by this belief, which could scarcely cope with the known multiplicity of attractive and repulsive effects, several of them conceived the idea of bottling the electrical fluid. The immediate fruit of their efforts was the Leyden Jar, a device which might never have been discovered by a man exploring casually or at random.

Franklin developed these ideas into the one-fluid theory of electricity. As in our modern theory, discharge was explained as flow from an object with excess fluid to an object deficient in fluid. But the theory was wrong about attraction and repulsion: it explained them in terms of forces exerted by excess fluid, and therefore could not account for the phenomenon (known at the time) of repulsion between negatively-charged objects, which it viewed as deficient in fluid. Despite its problems, the one-fluid theory went on to become the first theory of electricity to gain widespread acceptance.

Freed from the concern with any and all electrical phenomena, the united group of electricians could pursue selected phenomena in far more detail, designing much special equipment for the task and employing it more stubbornly and systematically than electricians had ever done before. Both fact collection and theory articulation became highly directed activities. The effectiveness and efficiency of electrical research increased accordingly, providing evidence for a societal version of Francis Bacon’s acute methodological dictum: “Truth emerges more readily from error than from confusion.”

This sentiment resonates strongly with me. The same is definitely true on a smaller scale: one tip for resolving confusion while studying that I picked up from Scott Young is to formulate a tentative hypothesis as soon as possible. It’s much faster to fix the problems with an initial guess than to re-read the material until you can arrive at the right answer in one step.

Conclusions

What are the practical ramifications of reading Structure?

When I wrote this review in 2018, it made me think we should aim for more diversity of intellectual experience in AI safety. Instead of encouraging people to go into safety straight away, maybe it would be better to have them study something else for a while, then come into safety later. Otherwise they’ll learn reinforcement learning and be locked into that way of thinking.

In 2021, though - again, I’m not so sure about these old thoughts. I’m not sure people really are that inflexible. I’ve seen colleagues at DeepMind grapple enthusiastically with, say, Cartesian Frames - about as different from the standard view of RL as you can get.

So to wrap up this old draft I’ll say: I’m not sure Structure really does have any practical implications. But it’s definitely a fascinating read, so if you’ve run out of other things to do during lockdown, I highly recommend it.
