in response to a couple of questions

July 19, 2009

I was speaking with Maysoon, a talented graduate student I know in the Boston area, and she raised a couple of interesting questions: one of them touched on in last night’s post, the other not addressed on this blog for several months. Let’s go in reverse order:

1. How can you start writing without knowing all that’s been said on the subject before? Isn’t there a worry of reinventing the wheel?

My own practice is never to read the secondary literature on a topic until I’ve first clarified my own initial thoughts about it, and even done a bit of writing on it. It seems to me that starting with the secondary literature serves both to delay the actual start of the project, and to sap one’s self-confidence about having anything to add to the debate. Moreover, it’s a lot more rewarding to read the secondary literature if you already have some thoughts of your own about the topic.

I’ve told my own story before, but it’s worth telling again for the dissertation writers among us. My big problem with the dissertation came from continually rewriting and polishing the first third of the book, especially the first half of that third. If you go and look at the first 5 sections of Tool-Being, they must have been revised dozens of times, and all told I wasted about three years doing that. I was also reading tons of Heidegger during those three years, so it’s not like I was merely procrastinating. I was just procrastinating on the writing part, the most important part.

At that point in time, I wouldn’t say that I knew the Heidegger secondary literature especially well. In this initial stage it seemed like a better investment of time simply to read Heidegger himself, not what other people were saying about him. (It’s the St. John’s graduate in me. For those who don’t know the College, all readings are primary sources, and secondary sources are even somewhat discouraged. The upside of this strategy is that it gives you a permanent fearlessness about tackling primary literature in just about any subject, including the hard sciences to some extent.)

Then came that point in time when I realized I wasn’t making much progress. The trigger for that realization, as I’ve said, is that my roommate was about to finish up and go on the market. I extrapolated my past rate of speed and realized it would be around 2005 before I finished if I kept going that slowly. And that was a horrifying thought.

So, I finished off the first third, and went on to the second third, which is the part on the secondary literature. I dreamed up a highly unrealistic plan for finishing that part, which was to alternate days: I’d spend one day reading a whole book or numerous articles on Heidegger by some prominent commentator, then spend the next day writing about it. Then turn on the third day to reading another commentator, then on day four writing about that commentator. And so forth.

However, this completely unrealistic schedule was not the least bit unrealistic: it worked. Why did it work? Because in the first third of Tool-Being I had already figured out the basics of everything that I think about Heidegger. Once that was finished, it was actually fun to read the secondary literature, and see where I agreed and disagreed with all the commentators. It was one of the most enjoyable 4-6 week periods of my intellectual life. By the end, I had written so much material that my advisor made me cut about 50 pages of it (a good idea on his part). And most of all, I felt completely at ease in the field. I had attained, very quickly, a good sense of where my interpretation fit amidst all the others.

If I had started with the secondary literature, however, it could have been a disaster. This way of proceeding tends to lead to timid qualifications and overly moderate claims that risk nothing. There can be an excessive deference to existing commentaries if they shape your initial view of a topic. There can be a tendency to quote too much. And this was my advisor’s one bad idea, which I refused to follow: flipping the order of the first two parts of Tool-Being. Never start with a survey of the secondary literature. Among other things, it bores the hell out of readers.

(On a related side note, some of you may remember the controversy surrounding Daniel Goldhagen’s book Hitler’s Willing Executioners, which claimed that anti-Semitism is inherent in German culture. I’ve never read Goldhagen’s book and so have no firm opinion about it. But the most preposterous thing I read in any review of the book went something like this: “Goldhagen’s book is a classic doctoral dissertation. Oh, how stupid all the rest of us were! We all missed it!” These remarks were not only sarcastically smug, they were also utterly inaccurate. The “classic doctoral dissertation” is not a brazen, gutsy, controversial claim like Goldhagen’s. The classic doctoral dissertation is a meek, unrisky, deferential, competent piece of analysis. Try not to write a “classic doctoral dissertation.” Try to write something with a backbone, making definite claims in your own name.)

But more generally, there is always a danger of reinventing the wheel. It’s a natural hazard of the intellectual professions. That’s why it is good actually to present papers and publish things, because it brings you into contact with more and more people, all of whom have read certain things that you’ve never read. (Everyone has gaping holes in their reading background, though much of it ends up being concealed by bluffing.) It is quite common when reading biographies, as I love to do, that an important thinker comes across an unfamiliar book in old age and pounds the table and says: “Dang it! It would have saved me 20 years if I had read this book in my youth.”

2. How can you start writing before you have each of the steps of the argument perfectly worked out? If you do that, aren’t you taking a risk that you’ll find out one of the arguments is wrong and that will ruin everything that comes afterward?

It’s an understandable worry. We can call it the “domino theory” of argument. One false step means all the later steps are destroyed.

It was Whitehead who observed that philosophy has wrongly borrowed the method of deductive inference from mathematics. One faulty step in a geometry proof does mean that everything later is ruined. But I agree with Whitehead that philosophy doesn’t work the same way. There is a certain autonomy to each level of a philosophy that can partly withstand even faulty reasons given to justify it.

A good analogy would be everyday objects. A table is built of legs and a top. Each of these parts is built of molecules. The molecules are built of atoms. The atoms are essentially built of quarks and electrons. And so forth. But shifting the position of one atom does not destroy the table. This is sometimes called “redundant causation”: many possible arrangements of atoms will give rise to the same table, and there is no cascading catastrophe when one small part is shifted, or at least not most of the time. (There are indeed cases, even in philosophy, where one mistake ruins everything that comes next. But nothing like in geometry.)

A related point comes from Emerson, who says something along the lines of “who cares for Berkeley’s or Spinoza’s reasons?” Maybe it wasn’t Berkeley and Spinoza, and maybe he said arguments rather than reasons, but it was something like that. His point, as I see it, is that when we read someone like Spinoza, even though to some extent we are following his arguments step by step, that’s not all that we’re doing. Many of Spinoza’s “arguments” strike us as laughable. But we play along, seeing what comes next.

And much of what comes next turns out to be of great value, even if we ridiculed some of his arguments along the way. The reason for this is that philosophical concepts are also marked by a sort of “redundant causation,” just as a table is. Many different arguments can be used for the same concept. If someone decimates one of your arguments for something, there will be a tendency to hang onto your concept for a while and look for new defenses for it. By no means does this show a lack of intellectual integrity. What it shows is a perfectly warranted suspicion against any particular argument for or against that concept. If you uphold a specific idea, it will not be solely because of the argumentative steps that got you there. It will be more because you see a specific power or clarity in that idea, as when Crick, Watson, and Rosalind Franklin all agreed that “the double helix is too beautiful a structure not to be the truth.” That’s not an “argument,” of course, but it’s powerful enough that they would have resisted any initial evidence against the double helix.

Normally, we abandon a theory not just because somebody happens to make a single good counter-argument against it. (This does sometimes happen, but it’s relatively rare.) The evidence against a concept has to reach a certain critical mass before we throw it out.

When I think of Spinoza or Hegel, I think primarily of the concepts they ended up with, and only secondarily if at all of the exact chain of reasoning by which they reached them. You don’t have to borrow that exact chain of reasoning in order to borrow the concept and transplant it horizontally into your own thought. When you import an idea, you don’t have to import its entire history as well. An idea is partly “emergent” beyond the arguments that gave birth to it.

So, it doesn’t bother me to leave a few half-formed arguments on pages 20 and 30 while moving on rapidly to page 150. It is a very bad thing to remain stalled on a single problem, because that problem is not necessarily the key to everything else you might be able to do. Being wrong is far from the greatest intellectual sin. And every book has points where it is wrong. Just look at Plato and Aristotle, the two greatest philosophers who ever lived, sliced to pieces by my 16-year-old freshmen a couple of times per year. And yet we do not abandon Plato and Aristotle. Why not? Not because “they had great historical importance in their time.” No, they are great philosophers even now. But their greatness does not consist in making fewer mistakes than other people.
