
Essay: Some general principles, part II

[I’ve written this post to stand on its own, but readers curious about why the numbers start with “3” will want to take a look at the first post in this series.]

Make things explicit

Make your commitments explicit – at least to the greatest extent possible. That’s where I start these considerations. Most of us, most of the time, run on a kind of autopilot, not really thinking about what we are doing or why, and letting habituation run things for us. This phenomenon should not come as a surprise to anyone, really, and I don’t intend to argue that one should work to remain fully aware of everything all the time. I would guess that just about everyone (at least everyone who lives in the modern world of jobs and commutes) has gotten in the car or on the bus and found themselves halfway to work before remembering that they meant to go to the drug store.

Injuries throw this phenomenon into sharp relief. My knee has improved substantially since I injured it in March, but I still have to be careful how I move. I was shelving books in the archives on Monday and needed to get up to the top shelf. I stepped onto the little rolling stool (you know, the vaguely cylindrical one all libraries have), and stepped up. In so doing I put just a bit more pressure on my injured knee than was wise, while also twisting a bit. Nothing popped, and the pain went away as soon as I straightened my leg out, but the experience told me that I still need to exercise caution. I can’t let my knee go on autopilot. Yet. Injuries, then, also show that the ability to do things without having to actively concentrate on them is also crucial to normal functioning.

Another reason for making one’s commitments explicit has to do with the therapeutic effects of writing these thoughts out. I don’t intend to suggest that writing about your problems will solve them (not least because I have no medical training), but at least in my own experience, getting these things out does often help one identify places where one might have some agency, an opening for something new or different. It also helps one find the knots and holes in their world, the places that still need elaboration – or even to be exhumed.

On that note, I’ll continue with my general principles. Again, I take these as kind of “rules of thumb” that tend to structure my life, but that don’t go “all the way down.” Hopefully they make some kind of sense.


3. You don’t just see the world – the Earth worlds

I don’t think it constitutes going out on a limb to suggest that a real, mind-independent world exists outside of us. However, the suggestion that any individual could have complete, “objective” access to this world – to “see the world as it is” – strikes me as totally bananas. And also harmful.

For Heidegger, the “earth worlds” (where “worlds” is a verb). This means that the Earth, or parts of it, coheres into a particular “world” for a particular person based on that person’s existential projects. That is, how the world appears has to do with who a person is and what that person, therefore, does. Once you start actively trying to notice birds, you will see them everywhere, all the time. Try it. Once you learn how to surf, the ocean looks different than it did before. Now you see that the calm waves that appealed to you for swimming or floating aren’t any good for surfing. What appears and how it appears to you as the Earth worlds will differ for everyone. This is not to say that each person sees a different and mutually incompatible world – the point here is rather that the valence, trajectory, or flavor of each person’s world differs from that of anyone else’s.

Timothy Leary (I think) called this a “reality tunnel,” making use of the emic/etic distinction from anthropology to make this point. For those unfamiliar, an “emic” perspective is “within” the object of study. A historian studying the social effects of Sufi lodges in Late-Ottoman Turkey who is also herself a Muslim, approaches the topic, at least partially, from an emic perspective. Another historian studying the same topic and period who is not a Muslim, on the other hand, would be approaching his research from an “etic” position, “outside” the object of study. Now, problems exist with this distinction, not least because any observer affects their object of study somehow, and often in ways unpredictable ahead of time. So, really, no one ever has a “pure” etic perspective because one must have some connection to the object/topic/person/etc. at hand in order to make any sense at all of it. But though “purely” etic perspectives remain beyond reach, one can, crucially, remember that one has a particular perspective.

Making one’s principles and beliefs explicit serves a useful purpose here, as well. Like most of these principles, I question (when not flatly denying) the possibility of transparency rather than translucence. I can know that I have a particular position, and bear this in mind when talking with other people – especially people I disagree with – but I can’t once and for all, thoroughly and systematically lay it all out. Why not?

For one, where is the “I” that would do this? Are these not “my” opinions and commitments? If “I” can see through myself, I have to see myself seeing myself, then myself seeing myself seeing myself, ad nauseam. Whatever “I” sees through me has to already be behind me, but by then I have two “I”s! So no. On pain of infinite regress, I cannot know my own mind transparently. But I can, by making my thoughts explicit, gain some translucence. How?

Let’s say I spend an hour or so writing out some thoughts. Then I get up from the table, make some tea, eat dinner, watch TV, whatever it is people do. During most of these activities I return to autopilot mode. [Just to clarify, this is not bad. It is necessary for life and not (completely) avoidable.] The next day, still in autopilot mode, I get up, go to work, do whatever it is I do for a job, come home, and find my notebook still sitting on the table. I idly flip through it while waiting for some water to boil and turn to the page where I made my thoughts explicit. Suddenly, I find myself face to face with myself but outside of myself. I see the words I wrote and find myself “snapped out” of the autopilot, if only temporarily. I have more thoughts on the importance of what French philosopher Bernard Stiegler calls “tertiary retentions” or “epiphylogenetic retentions,” which lead to the next principle.

4. Cognitive structures don’t stop at the inside of your skull.

I consider it a significant limitation that people seem to think that their brains are where they do all their thinking. Even more limiting is the notion that one’s “self” is something external, transcendent, and unconditioned – a Cogito from…somewhere driving one’s body around like a car. Maybe the most limiting idea, actually, is the accompanying notion that thoughts are not “things” that motivate action or that one has some degree of control over. Thoughts aren’t “real,” since they happen “inside.” My principles on this require careful elucidating, and I’m not even sure I’m getting what I want across. I’ve given a preliminary and clumsy presentation of some of these ideas in my most recent Report from the Workshop, so hopefully readers won’t lack all familiarity. Consider principle #4 the most “work in progress” of the principles so far.

Consider: When you say what you think, you use words that are publicly available. Wittgenstein’s “beetle in a box” thought experiment has convinced me, at least, that we don’t have access to a “private” language – to use language, we must be integrated into a previously existing symbolic structure and adopt its use and conventions. When one speaks their native language, it feels “natural.” One might occasionally struggle to find the mot juste or to push a phrase off the tip of one’s tongue, but in general one’s native language doesn’t feel like speaking a language at all – it just feels like speaking. Contrast this with learning a foreign language (especially as an adult) – even speakers with a high degree of proficiency might still struggle sometimes and make mistakes. It takes a long time of dedicated practice and use for a language other than the language one grew up speaking to just feel like speaking.

Further, consider the ways that we use language. Speech comes first chronologically, although one listens long before one can speak with ease. After speech come reading and writing, complicated skills that sometimes present distinct challenges, but that, in most cases most of the time, any child can learn. The means by which we read and write – clay tablets, papyrus, bamboo slips, palm-leaf manuscripts, rag paper, digital screens – all exist outside of us (in the sense of not being part of our bodies). But today, at least in “developed” countries, it is nearly unthinkable for an adult to have never learned any reading and writing. “Functional” illiteracy, or not having read a book ever again after high school, we can understand, but not not knowing what a book is at all.

Reading, then, stands as a kind of language use that requires things “external” to us – objects that do not have vocal cords, mouths, and lungs. And these objects, once we adopt them, cannot subsequently be separated from us. No matter how hard I try, I cannot “un-learn” how to read the languages that I read. I might be able to train myself to focus on the letters I read rather than their significance as words, but even if I study typefaces for their aesthetics, I read the words the type spells out.

A “reading” mind is, then, different than a “non-reading” mind. And the differences don’t stop there. Consider reading a physical codex versus reading a digital version of the same text. While both count as “reading,” in the sense that one visually decodes arrangements of symbols, these readings nonetheless differ substantially in a variety of ways that are too detailed to go into here. Suffice it to say that writing a to-do list on a piece of paper, or in an application on a smartphone, is a good example of the extent to which one’s mind does not stop at the inside of one’s skull, or even at the inside of one’s body generally.


I hope the reader will forgive my clumsy writing in this post (and the previous one in the series). Part of working to make one’s principles explicit involves making false starts and persisting at the edge of one’s conscious experience. I’ve insisted that one doesn’t have fully transparent access to the contents of one’s mind, but one can gain a certain degree of translucence. I’m making these posts public because it strikes me that writing so that others might read and understand what I’m talking about forces me to take an even further step outside my own head. I can’t rely on shorthands and assumptions, since I remain aware that others might not share them.


Essay: Some general principles part I

Course adjustments

The other day I spent a bit of downtime writing out an attempt at formalizing (or at least making explicit) the general principles from which I tend to operate. I don’t pretend to have listed them all here, not least because I make clear in several of the principles that one doesn’t and can’t have “pure” or “transparent” access to one’s mind (not least because one’s “mind” doesn’t exist the way most people think it does). Asymptotic access, maybe, although even there I have questions.

I like to try and make these things explicit when I experience significant shifts in the demands on my time and effort, shifts which demand not only that I change what I spend my time doing, but also, in some sense, who I “am.” Semesters always end like blast doors crashing down. I go from more or less constant activity – grading papers, checking email, reading for class, writing for class, worrying about whether I’ve forgotten to do any of these things, wondering whether I have any drafts I could punch up and publish, etc. – to…nothing. I have a job this summer and other obligations, but these don’t keep me quite as busy as the semester does. Taking time to make my rules of thumb explicit means I get a chance to tap the brakes and avoid spending the summer spinning my wheels.

Speaking of rules of thumb: the principles I list below don’t work (for me) like hard-and-fast commands, or articles of faith. In fact, most of the time I don’t even realize that they structure my behavior, hence the value of making them explicit. If I know something about myself more or less clearly and explicitly, then I can make the conscious choice to continue operating along those lines, or try something else.

In the list below I have tried to articulate my principles (as of May 2022, anyway) as clearly as I can. [As I wrote and elaborated these principles, it grew clear to me that listing them all in a single post would prove too long, so I’ve just included the first two here and will address the others in further posts.] I have also tried to include relevant citations and sources of inspiration for these principles. [Keep an eye out for a post dedicated to this – it’ll take some time for me to put together.] I should emphasize, again, that these principles don’t exhaust my commitments. Nor do I argue that these principles hold in all places and times or that every person should adopt them. I can say from experience that explicit and consistent application of these principles has improved my experience of life in significant ways, but that doesn’t mean everyone else will benefit the same way.


1. You are what you do, and vice versa

“Being” and “doing” operate in a recursive relationship that one can symbolize as a kind of “advancing” spiral (scare quotes because the advance does not approach a predetermined goal, but nonetheless moves in a general direction at any given moment).

No clearly definable difference exists between thinking (including “staring off into space,” “getting lost in thought,” writing down ideas, manipulating models, etc.) and doing things like mowing some grass or eating an apple. Thoughts affect the body/mind (on which more below) in a way similar to how physical activities do, but one mustn’t confuse the levels (see point X).

Despite no final teleology ahead of time (at least none that one can know for certain rather than believe in or hope for), a target exists at any given time under any given arrangement and schema of mind/body. That is, if the mind/body remain in their current relationship, one will tend to approach a certain point. And since one will eventually die, the point at which one dies can define the target retroactively. [For my own purposes, as an atheist, this makes sense to me. I recognize that those coming from theistic positions will differ on this point, but I nonetheless would argue that belief in a final target, in a universal teleology, doesn’t mean the same thing as knowledge of that final target. Belief in a final moral purpose to the universe does not preclude acting as though the future remains open and, to some extent, malleable. Knowing that the final teleology exists, and acting on this knowledge the same way one acts on the knowledge of where the nearest Walgreens is, brings problems. But that’s a topic for another post.]

Think of a ship underway. The ship could go to any number of ports, although not every single one of them. If the captain points the ship in broadly the right direction and just guns the throttle, the ship will probably not ever arrive at its destination because wind, currents, collisions, and all manner of other things will affect its trajectory. Even a thickish mat of barnacles on one side might cause a list that takes the ship far off course without regular examination and correction (e.g. what I intend with this post). One should also note that a ship’s captain not only makes these regular course measurements and adjustments, but also logs them externally to him/herself in a publicly accessible form.

2. Not transparency, but translucence

One “never” (or as close to never as makes little difference) has total, transparent access to one’s mind/body. In support of this claim, consider the experience of being “drawn up short.” For example, consider two good friends having an argument. Things get heated – maybe they’ve had a couple drinks too many – and one says something absolutely unforgivable to the other. In the moment following this traumatic (in the sense of “resisting symbolization”) irruption, both friends just stare at each other – neither knows the other, now. The friend who made the outburst apologizes, but the dice remain cast. One cannot un-cross the Rubicon.

Experiencing this feeling of getting “drawn up short” doesn’t have to come from a situation like the one described above, but I trust that the reader will understand what I mean. While I gave a negative and interpersonal example, a variety of things can “draw one up short.” Catching a glimpse of a particularly spectacular mountain vista from the corner of one’s eye, narrowly missing stepping on a dog turd on the sidewalk, acute chest pains that turn out, after an EKG, to have come from bad gas. In Being and Time, Heidegger describes the experience of using a hammer, only for the head of the hammer to go flying when one tries to drive a nail.

In The Myth of Sisyphus, Albert Camus writes that the person (he uses “man” throughout, but means “person”) who recognizes the absurdity of life gains what he calls “lucidity,” a new sense of the world and one’s possible places in it. The experience of being “drawn up short” offers an opportunity for a kind of lucidity in that it throws into stark relief the assumptions one makes about the world and its consistency.

To return to the metaphor of the ship, think about what would happen if the captain of a large ship, say a container ship of some kind, put the vessel underway and then violently cut the steering to the left or right. While here I probably demonstrate my ignorance of how modern ship steering works, for the purposes of the metaphor, we can imagine two salient possibilities. First, as the rudder slams into the suddenly adamantine water, it twists its internal mechanisms and snaps. Now no steering is possible. Insisting on constant, “pure” transparency of the mind/body to oneself leads, at least potentially, to total stasis and movement only at the whim of sea and sky. [I would argue that being “blissed out” and getting “beyond thought” in, for example, many forms of contemporary Buddhist practice or Pentecostal ecstasy, can lead to this possibility.] The second possibility involves the ship capsizing. The rudder and steering mechanisms hold, but the vessel lists too much, starts taking on water, and begins to sink. Here again, no steering is possible and one loses the control one had, however modest. Scylla and Charybdis, without even any monsters. This second possibility represents the fate of the Romantic, obsessed with some kind of “truth” to themselves that sits beyond their daily mind/body lives and efforts. As the ship sinks, it hopes to leave a beautiful corpse.

Against these possibilities stands the acceptance of translucence, the idea that one can and does have some access to the “internal” working of one’s body/mind, but that one cannot (and should not) attempt to gain “complete” access. I call this state “translucence” because something always gets in the way, but one can nevertheless see clearly enough to steer and make modest course corrections. Think of the difference between a naked light bulb and a lamp with a shade on it.


It would seem that I have now committed myself to a series of posts on this topic. Hopefully others find something of use in these principles. Keep an eye out for probably another two posts outlining the rest of my general principles (again, as of May 2022), as well as a final post consisting of a bibliography of works that I’ve often found useful or edifying. I still have some last assignments to complete before the end of the semester on Sunday, so I will probably post these latter entries this coming weekend or next week.

[For anyone interested, it turns out that I adopted this idea from a series of podcast episodes put out by Hilaritas Press, the literary executors of American writer Robert Anton Wilson. Each episode outlines the basic work and ideas of some of the thinkers and writers that influenced Wilson. RAW has remained a strong influence on me since I first encountered his work in high school, so I suppose this laying out of principles also serves as a kind of tribute to him. In any case, you can listen to the podcast here.]

Essay: Care and the Green Thumb

WARNING: If you have no patience for elliptical style, riffs and digressions, or etymological wordplay, best skip this post.


Problematic: What does it mean to have a “green thumb?”

For Heidegger, one properly acts through the hand. (Do note the singular.) Insofar as humans (which are not all Dasein, and, at least for Dreyfus, vice versa) have hands, we properly act. The hand distinguishes the human from the non-human in acting.

Of course, an immediate objection arises: what about the great apes? Or Old and New World monkeys? What about elephants, whose trunks are at least as capable of handling finicky bits as a human’s fingers? As Derrida argues pretty convincingly in The Animal that Therefore I Am, Heidegger’s thinking privileges humans over other species, thus inadvertently continuing a tradition that places humans, if not at center stage, then at least at the top of the playbill. Any attempt to identify and designate a specific difference between human and any given animal fails, on Derrida’s account, not least because one could always find examples of individuals that are not human doing things that, supposedly, only humans can do. Of course, DNA sequencing makes this trick even easier. I have a lot more in common with a pumpkin than one might initially suppose. (A fact which I rather like. Pumpkins, when planted as part of a Three Sisters bed, provide shade and keep the soil cool and moist for the beans and corn. I’ve always felt more comfortable with support/maintenance roles – a point I will return to below. Besides, pumpkins are kinda round and squat, much like myself.)

For the moment, I want to bracket concern with differentiating humans from animals. While I find Derrida’s contributions useful and important, it nonetheless remains obvious to me that, even if one cannot clearly and permanently distinguish humans from species that are not human (and that this lack of distinction bears ethical ramifications), differences nevertheless persist.

Rather than the hand, then, I would look to the thumb, the means by which one (a human and a Dasein, for the time being) grips, encircles, takes hold of. In German, a concept is a „Begriff“, reminiscent of “gripped.” One encircles with a concept, creates a barrier or boundary (or perhaps a membrane), a place to hold on – a grip. In Heidegger’s “A Triadic Conversation,” the character of the Scholar most clearly represents the power of the „Begriff“, of the concept as boundary.


[A brief riff, if the reader will indulge me. Humans act through the hand, but this does not apply to all humans. Even bracketing for the moment individuals with impairments or motor difficulties, at a much more basic level the hand does not represent our originary means of “handling” things in the world. How does a baby interact with the world? By putting things in her mouth. One often reads “human” to mean “adult human” (historically also “white,” “male,” and “free” or “property owner”). But how did those adults get to the point of using only their hands to interact, with the mouth relegated to food, drink, medicine, stimulants, and (sometimes) the mouths and genitals of others? The mouth takes in, and indiscriminately, until the hand mediates the encounter.]


The longest of Heidegger’s “conversations” (collected in Country Path Conversations edited and with an excellent introduction by Brett W. Davis) takes place on, you guessed it, a country path. Three conversants, a Guide, a Scholar, and a Scientist, take up again a conversation they had left off a year earlier. As the conversation carries on, the Guide seeks to convince the Scientist that, contrary to popular belief, one can describe science as an applied technology, rather than the other way around. The Scientist, a physicist and positivist, resists these ideas, remarking that the Guide’s words make him feel “groundless” or dizzy. For the Scientist, the Guide is LSD in the water. But not so with the Scholar.

As the conversation ambles on, the Scholar tries to find ways to identify and encircle the Guide’s words. Some statement reminds him of Leibniz, or Spinoza. Unlike the Scientist, whose disciplinary specificity and (necessary!) rigidity make him an easy window to smash, the Scholar has a much more flexible immune response. He enlarges the circle of a concept, broader and broader, until it can, potentially, fill all of space. The Scholar, one could say, has a much firmer “grip.”

The range of the Scholar’s ability to “grip” novelty into his existing handhold makes him (an assumption – we don’t actually know from the text) a tougher nut to crack for the Guide (whom I think one can safely say represents Heidegger more or less in earnest). To the Scholar, anything the Guide says can be identified with an existing concept and fit into an existing schema. Resemblance oozes subtly into identity.

I have, of course, a literary analogy for this phenomenon. In William Gibson’s Pattern Recognition (probably his most interesting novel, in my opinion), the protagonist Cayce Pollard (about whom more in this post) travels from New York to London to Tokyo to Moscow, and each time finds herself playing a kind of game where, when faced with difference, she tries to fit it into an existing schema. Parts of London (which she calls the “mirror world”) are “really” like New York. Parts of Tokyo are “really” like London. Anyone who has traveled extensively, especially to big cities, will recognize this pattern of behavior, a pattern made increasingly understandable (if no more laudable) by the homogenization and leveling of global culture. For me, Shanghai “really” was just like Paris until I turned off the main thoroughfares and found myself firmly back in China again. But then I passed a Burger King, entered a Starbucks, and placed an order in English, at which point I could have found myself pretty much anywhere.


[I beg the reader’s indulgence for another riff. Starbucks, it seems to me, best represents the homogenized no-place subsuming cities large and small. I have visited Starbucks locations in several countries on three and a half continents, and each only stands out as a separate place in my mind because of its differential surrounding context. For example, I visited one in Shanghai located inside a huge multi-layer mall that I found garish and too bright. It looked just like all the “nice” malls I have ever visited, but something felt a bit “off,” like how UHT milk from a box doesn’t taste like fresh milk. Another Starbucks, in Mexico, I remember because the inside of the shop was too intensely air-conditioned, leaving the glass door to the outdoor seating area covered in a thick layer of condensation. It gets hot on the Yucatan Peninsula.

One might respond that McDonalds would serve as a better example of homogenization. I would not disagree. Initially I would say that McDonalds has more of a functional or even “low class” set of associations and homogenizes “from the ground up,” but that doesn’t exactly work since, for example in China, one can buy fast food from street vendors for much cheaper. McDonalds isn’t haute cuisine there, but it’s not a cheap source of fast and convenient calories either. Again, as with Cayce Pollard, whose usual “allergy” to haute couture brands bothers her less in Tokyo than it does in London, context matters. Nonetheless, I think that Starbucks, which I associate with people tap-tapping away on MacBooks, better represents the digital and aesthetic homogenization of culture. Maybe a homogenization from the inside out, from the aspirational and downwardly mobile middle- and consuming classes that serve as insurance against overproduction. A smoothing of culture, as Byung-chul Han puts it in Saving Beauty. To put it a bit vaguely, a McDonalds anywhere feels like more of a “real place” to me than a Starbucks anywhere.]

Now, I don’t mean to suggest that making comparisons or finding similarities is some kind of problem in and of itself. You need some existing schema to apprehend a new idea, at least initially. Learning the grammar of your own native language makes learning a foreign one easier (or at least less totally baffling). The problem arises when all novelty is “fittable” into one’s schema ahead of time. We don’t live in a modular world, where pieces can go together in various ways, but are nonetheless standardized. This isn’t Legos. Heidegger’s Scientist needed his rigid positivism not only to actually conduct scientific research, but also to allow for the possibility of going beyond his scientism. Byung-chul Han writes (somewhere, I don’t have the citation right now) that knowledge differs from data in that knowledge is gained against resistance. The Scientist’s rigidity creates precisely such resistance. The Scholar’s erudition, on the other hand, more amorphous and loose than the Scientist’s, runs the risk of souring and overrunning the entire world. Like a gas, there’s nothing to push back against. Every Starbucks looks like all the other Starbucks, even if the layout and specifics differ slightly. If you’ve seen one Starbucks, you can probably literally imagine them all.


Speaking of Starbucks, where they wear green aprons, I now sense the approach of the point of this excursion, like a change in the wind. To return to the green thumb.

The thumb serves to grip, to encircle, to make concepted – „zu ‘Begriffte’ machen“. As we saw with Heidegger’s Scholar, this gripping broaches the possibility that, as Ecclesiastes would put it, “there is nothing new under the sun.” Everything strikes one simply as “like” something else. One cannot any longer imagine novelty so new that it passes through to trauma.

The green thumb, then, a subspecies of thumb as it were, “grips” and encircles. But now, we must ask: what does it encircle? How hard does it grip? Does the wrist remain loose and flexible, or taut, tight, under pressure? Do the muscles of the forearm suffice to accomplish the hand’s goal, or do you have to put your back into it and slip a disc? Does the grip involve all five fingers? Both hands? (Heidegger, to the best of my knowledge, does not ask or answer these questions. Part of his problem with typewriters has to do with one properly acting “through the hand.” Of course, as Don Ihde points out, this is a clear indication that Heidegger never learned to type with any proficiency.)

A green thumb means its holder (its haver? its bethumbéd?) can keep plants growing and alive. Many people described as having “green thumbs” can, of course, tell others in explicit terms how to care for plants, but their ability nonetheless continues to strike others as peculiar and impressive. And even they themselves cannot exhaustively describe their own capability. Why? Because “having a green thumb” does not mean “knowing all about plants and being able to express that knowledge systematically and precisely in symbolic form.” To those poor souls who always kill their succulents, the “green thumb” is magic, something almost preternatural that they despair of learning. But this is a mistake.

The meaning of a “green thumb” really comes down to this: a particular way in which the green thumb “grips” the world. It is not a way of knowing in the sense of exhaustively and systematically articulating symbols through recall, but rather a way of comportment, a mode or key of being.

Consider an analogy with your native language. We say that one “knows” one’s native language, but we really mean something more like one lives one’s native language. (To put it in Heidegger’s terms, “language speaks us.”) Aside from sometimes struggling to find the right word, or occasional stumbles, one does not need to remember anything to speak one’s native language. Don’t believe me? Spend six months working diligently but not too intensely on Duolingo (any totally unfamiliar language will do), then take a trip to a place where that language is the native language of most of the population. If possible, try to avoid big cities where you are more likely to encounter others who can translate for you.

What will happen? Well, Duolingo works pretty well, so you’ll get up to speed on basic terms and meeting basic needs quickly enough. But beyond that, you will find yourself thrown for a loop. You will find, in your stumbling attempts to navigate the world and interact with others, that how you communicate with others plays a significant role in forming who you are, both to others and to yourself. The most difficult (and intimidating) part of learning a new language is the plummeting feeling of having to learn how to be yourself again.

A green thumb – or an eye for photographic composition, or an ear for musical composition, or a good arm in baseball – works the same way. One doesn’t “have” a green thumb or “know” a green thumb. One is a green thumb. That is, the green thumb serves as a descriptor of a mode of being in the world, one that cannot be exhaustively expressed because it does not come after the one doing the being – it is the being.

Another analogy might help. I do not know how to surf. If I accompany a surfer to the beach and we both look out onto the ocean, she and I will see different things. Not “literally” (at least assuming we have similar levels of visual acuity, etc.), but rather in the sense that the surfer will be able to tell if it’s a good day for surfing, and I won’t. She might be able to explain some of how she knows this, but not all of it. And, unless my being already exists in some sense “adjacently” to the being of a surfer, I may not even understand the things she is able to explain. However, if I begin learning to surf, if I practice surfing, if I become a surfer, then maybe someday she and I will be able to once again walk onto the beach and both see whether the waves are good that day or not.

The green thumb works the same way. One has to learn how to be such that one has a green thumb. While this learning must incorporate explicit symbolic knowledge to some degree, the real work, the real learning, and the real change in being comes from the doing, and from the becoming.

The green thumb, as a thumb, grips: it creates and holds concepts of the world. But the green thumb differs from, for example, the Scholar’s pre-configured means of expanding his grip, precisely because plants are not symbols. The mimosa tree in my front yard is, if the conditions are within a certain range, gonna mimosa. Period. I can help it along, shelter it, take care of it, feed it and water it, but fundamentally, the plant is doing its own thing. The green thumb “grips” the plant, but it can never do so completely, simply because the plant does not allow itself to be fully symbolized. It is outside of the human in a significant sense, and even an exhaustive knowledge of horticulture does not preclude the possibility of plants dying for what appears to be no reason. For all that one’s symbolic knowledge of plants can expand and expand, it eventually founders on the brute reality that the plant is not up to you.

And here we see the most salient facet of the green thumb. Insofar as it does “grip,” conceptualize, and encircle, it does so in the knowledge that this is only ever a kind of loose grip, a conceptualization that may prove useful in some cases, but ultimately fails to fully encircle its charge. It is a grip of care, the careful grip with which one holds a child’s hand while crossing the street. This is not a grip one can learn except existentially. By doing. And in so doing, by changing not just what one knows, but who one is.

Essay: That’s exactly what they want you to think.

(Formerly posted as “Report from the Workshop: 04/29/2022,” but I decided it’s much too long for a report and should stand on its own.)

Semesters are like volcanoes: they simmer and simmer for a long time without anyone thinking much of it, and then they decide one morning to violently explode.


Yesterday I submitted an outline that resulted from a semester-long independent study on Heidegger’s thinking of ontological “death” and its ramifications for education. I started with education, but somehow ended up creating a pretty ambitious research project involving existential death, conspiracy theories, and the epistemological necessity of vulnerability.

I say I started with education because, to be honest, I’ve grown tired of thinking about education. Problems in education increasingly strike me as consequences of more general (and therefore more invisible) social, technological, and epistemic limitations. Talking about “education” on its own seems more and more like missing the forest for the trees.

Of course, as a (sometime, amateur) Marxist, I shouldn’t find this surprising. The depredations of capital flow affect all aspects of social life, although differentially in different domains. I’m finding myself gravitating more and more toward what Heidegger calls Gelassenheit, “releasement,” a term he cribs from the German mystic Meister Eckhart and reappropriates for his own use. Where Eckhart would advocate “releasement toward God,” Heidegger would advocate “releasement to the things (in the world).” This allows one to “return to oneself” and see one’s existential situation anew and (potentially) with greater clarity. It’s also the exact opposite of the way that networked digital media platforms want people to behave. Thoughtful, meditative behavior doesn’t play well on platforms that run on a fuel of “engagement.”

[Upon reflection, it strikes me that one could read Gelassenheit as a kind of “blissed out” disconnection from the world. I don’t think Heidegger intends this reading, although it’s not difficult to see how one could make this mistake. Rather, and I think this is important, “releasement” for Heidegger is releasement to the world and how it shows itself to us. That is, one’s usual and unthinking apprehension of the world is “broken” and then “reset.” I hurt my knee about a month ago, and though it’s much better now, it still feels different than it did before – I actually have to think about walking and pay attention to where I set my foot. I’m imagining releasement as occasioning something similar.]

For a while now I’ve been mulling over the idea that we humans, especially but not exclusively those of us in the Global North, have been “domesticated” by a product of our own invention. Networked digital technologies are “pharmacological” in that, on their own, they don’t have a positive or negative valence. Two aspirin help a headache. A bottle of aspirin, however, will kill you. It isn’t exactly a question of quantity, but rather of distribution and following impulses. Every time you get mad at something you see on Twitter and “clap back,” Twitter is literally (and I mean this in the dictionary sense, not for emphasis) making ad revenue from the reflexive operation of your neural pathways and fight-or-flight reflex: the longer you stay online, and the angrier and more invested you get, the more fucking ads you are exposed to. It’s like if you “worked” in a factory where every time the doctor made your knee twitch with that weird hammer, the hospital administrator got money from the hammer manufacturer. (Maybe that is how doctors work, I don’t know.) But that’s an essay for another time. Right now I want to talk about serendipity.


As I was typing up my outline to turn in I realized that several of the books I’ve read “for fun” this semester have borne direct relevance to the social epistemological questions I’m beginning to pose. This happens to me pretty regularly, actually, and it’s probably just a case of apophenia, seeing patterns where there aren’t any. Of course, if the universe is one unified thing and any individual and their sensory apparatus is a distinguishable part of it that, nonetheless, follows similar rules as elements of the universe at much larger and much smaller scales, then who is anyone to say that there aren’t patterns? Maybe we just need a different point of view.

{The sentence starting with “of course” in the above paragraph is dangerous. The astute reader will understand why. If you don’t understand why, just recognize that I was, and again literally, fucking around up there.}

Let’s talk about books. First, I started reading a collection of Philip K. Dick’s short stories in January. I keep the volume by my bed for nighttime reading, so I haven’t made a ton of progress through it. But even Dick’s weaker offerings bear the distinctly clammy and metallic odor of paranoia. His VALIS trilogy, written after a kind of mystical experience he underwent and then tried to work through in his Exegesis, features a millennia-long conspiracy in which the Roman empire never died and continues to enslave humanity. Wild. In Dick’s fiction, nothing is as it seems, and there is often no way out. (Incidentally, I appreciated the most recent Matrix movie for driving this point home. I’m a congenital contrarian, so I love that film because everyone else seems to hate it, but I also love it because Lana Wachowski strikes me as dedicated to not infantilizing her audience with a clearly spelled out “message.” Just like the previous installments in the series, the “moral” of the story is: “take a minute to think, you philistines!”)

I also began Robert Shea and Robert Anton Wilson’s Illuminatus! trilogy, a send-up of acid-trip political paranoia from the 60s and 70s. The narrative structure is experi-mental (see what I did there?), with point-of-view changes galore, and makes reference to a wide variety of very specific conspiratorial schemas. The intention is clearly to satirize paranoia, but the novel does so in a way that leaves the reader unsure of just what the “real story” might be. My opinion, for what it’s worth, is that this uncertainty regarding the “real story” is good. Since Descartes, philosophers have looked for “absolute knowledge,” knowledge we could know without a shadow of a doubt that we knew. Personally, having read the bit of Descartes’ Meditations where he gets to his famous cogito, I think he may have been trolling. In any case, the spectre of “absolute knowledge” looms large and nastily. For a Biblical literalist, any challenge to a truth claim made by the Bible potentially throws the whole thing into question. Hence the literalist’s jumping through ever-more-spurious hoops to save the phenomenon. But here’s the problem: this kind of face- and phenomenon-saving behavior is now characteristic of everyone. Why can’t things be “true enough?” Or, saints preserve us, fucking metaphors?

Umberto Eco’s Foucault’s Pendulum, which I just finished the other day, actually makes that last point explicit. It’s the story of three editors at a publishing house who basically use a computer program (named after a medieval Kabbalist) to invent a global Knights Templar-themed conspiracy after encountering a strange Colonel with what he claims is a decoded Templar message. At first it’s a joke, designed to poke fun at the spurious dot-connecting done by the “Diabolicals,” enthusiasts of the esoteric who constantly submit manuscripts “revealing” hermetic and conspiratorial secrets. The editors are hard-headed skeptics, with what Eco describes as a kind of hard-headedness apparently congenital to the Piedmont region where they come from. Over time, however, that all starts to change. As the Plan becomes more and more real to them, and the stakes start getting higher, the narrator Casaubon reflects that he and the others have, precisely, lost the ability to think metaphorically or figuratively. The novel is deeply tragic, even though it is, like Illuminatus, intended as satire.

I’ve often thought that fiction is a better vehicle for getting some ideas across than non-fiction (especially in philosophy). Genre fiction like sci-fi or thrillers seems especially useful to me, partly because it isn’t (or hasn’t historically been) taken seriously. Crichton’s Jurassic Park, for example, makes what seems like a pretty persuasive argument for at least some amount of caution in biological engineering, but when Jeff Goldblum’s lines get turned into memes, the thrust of the argument gets obfuscated.

Foucault’s Pendulum has been described as “the thinking [person’s] Da Vinci Code,” and I think that’s right. The point of the novel is to show that the logic of conspiracism leads to an abyss. When everything can in principle be connected but there is no nuance, no sense of when and which kinds of connections are appropriate, one falls into the trap of having no choice but to try and become omniscient. This is impossible (for a human being, anyway), and so omniscience comes to mean imprisonment in a miasma of one’s own epistemological overindulgences. It doesn’t even make sense to call it a “web” of connections anymore because a web has a particular valence – it isn’t arbitrary. While Eco could have probably made this point quite clearly in an essay (or, haha, a blog post), the novel’s form, that of an upper-level airport thriller, gets the reader in the guts in a way that making claims and articulating arguments does not.


“Interesting,” the reader has by now mumbled to themself a few times. “So you just happened to read several books that all had to do with paranoia and conspiracism, and then decided to do more research on this phenomenon? Seems pretty straightforward to me.”

I agree, actually. I’m not trying to argue otherwise. Rather, I’m trying to demonstrate that there doesn’t need to be a straight line from point A to point B in all cases, and even where such a line does in fact exist, one might not be able to perceive it until after the fact because, wait for it, the line itself might not exist until after the fact. (Hegel, whom I haven’t read, calls this Nachträglichkeit, “retroactivity.”) That is, there’s a difference between conspiracy and serendipity, but sometimes this difference is hard to perceive. Either way, one should wonder, “does there need to be a reason?”

The final book I want to talk about, William Gibson’s Pattern Recognition, deals with a kind of serendipity of perception and offers a potential corrective for the pathological drive to omniscience. Probably best known for his earlier Neuromancer, Gibson basically invented the cyberpunk genre. Pattern Recognition, however, doesn’t exactly fit that mold. There are computers, of course. The plot actually comes to revolve around a series of film fragments of unknown provenance unearthed on the (2002) internet, but the digital technologies and the world of the setting are all “real” and recognizable. The novel also has to do with, as the title suggests, pattern recognition, and seeing patterns where there aren’t any. But over the course of the novel the reader watches protagonists who don’t gain victory over the world of networked technologies and final, full understanding, but rather find a kind of catharsis in not knowing for sure.

The protagonist, Cayce Pollard (pronounced “case,” though she was named after the American mystic Edgar Cayce (pronounced “casey”)), works as a freelance “cool-hunter,” roaming urban streets on the lookout for the Next Big Thing in fashion. She has a strange and somewhat uncomfortable ability to “sense” whether a logo will “work” on the market or not, as well as a complete intolerance for brand names and logos which she describes as a kind of “allergy.” Gibson makes a fair bit of hay over, for example, Cayce’s clothing – tags carefully cut out, the pseudo-brand of a “Casio clone” watch sanded off. (Many of these descriptions read like museum copy twenty years on, which I think adds to the novel’s interest.) Cayce doesn’t know how she does what she does, only that it works. When she is hired by Hubertus Bigend, a Belgian businessman in the mold of a proto-Elon Musk, Cayce finds herself connecting her business of evaluating logos with her passion for finding whoever is making the mysterious online footage. Think Indiana Jones, but it’s a black-clad woman from New York who does Pilates in the early 2000s. (Just to be clear, this description is intended as a positive appraisal of the book.)

While parts of the novel now feel dated (no smartphones, people communicate by calling and emailing rather than DM’ing, etc.), it nonetheless remains eerily resonant. The reader learns, about halfway through the novel, that Cayce’s father, Wingrove Pollard, worked as a security contractor for American intelligence services, securing embassies. Win disappeared on the morning of 9/11/2001, and there has never been positive proof of whether he is dead or alive. The novel takes place soon after 9/11, and the trouble with Win’s undeath has to do with his estate – Cayce and her estranged mother, who lives in a kind of hippy commune dedicated to scanning rolls of tape for so-called “electronic voice phenomena” (EVP), cannot claim Win’s inheritance until he can be proven dead. But the estate isn’t really what concerns Cayce. The really troubling thing is not knowing.

There’s a lot of not knowing in this novel, and I would argue that the catharsis Cayce eventually reaches (which I won’t spoil) serves as a useful model for how we ought to live now. 9/11 has faded into the background of the American psyche over the last twenty-plus years (although not from American politics, unfortunately), but we still find ourselves living in a world beset by bad things happening for reasons opaque to us. The rush to claim that covid-19 was a Chinese-developed viral weapon, for example, tries to find an “explanation” for something that, insofar as it posed a threat to global health, at least initially simply had to be dealt with. I think it likely that scientists will know for certain where and how covid came from in my lifetime, but I don’t think we know now. That doesn’t stop speculation, though, driven by the pain of not knowing, of feeling the rope slip through our fingers as we hang over the abyss, unsure whether anyone will come and save us.


Pattern Recognition presents the reader with two questions that eventually merge into one for the protagonist: “who makes the mysterious videos?” And, “what happened to my father?” One of these questions is, eventually, answered. The other, however, is not. Or not completely. Not beyond a shadow of a doubt. But even with this possibility of doubt, Cayce finds a way to live. To “pollard,” in horticultural terminology, means to cut down a tree, leaving a stump from which new, straight branches will sprout. It’s a means of sustainable forestry because a few pollarded trees can produce lots of wood for quite a long time, rendering cutting down other mature trees unnecessary. One could read Cayce’s last name as reflective of the myriad possible coulds she encounters. There isn’t a main trunk to speak of – the postmodern “proliferation” has replaced the late-modern “grand narrative.” Coming from the position of Descartes, or later of Kant’s sapere aude!, “dare to know!,” the only choice in a world of massive complexity and scale seems to many of us to be trying, like the editors in Foucault’s Pendulum, to make sense of it all. The desire to become omniscient, to become God, to become identical to the universe itself, is a desire not for immortality and certainty, but for un-death and the constantly grinding need to continue suspecting. Either it all makes sense, or none of it does, says the inheritor of an Enlightenment grown malignant, and the abyss calls louder, louder.

What saves us from the abyss? Well, at least from my perspective, certainly not God. Neither will History, Justice, The Next Generation. The arc of history only bends toward justice if it is made to bend. The universe on its own seems supremely unconcerned with the whole thing, like a dandelion blowing in the breeze. We’re on our own and, like Cayce Pollard, unsure of what’s what. But also like Cayce Pollard, we’re not each of us all alone. Pollards produce myriad new growths from a single stump. We can still help each other, even if no one person finally “knows the score.” And we can also keep each other honest. Not necessarily by arguing, but simply by wryly asking, like the skeptical Piedmontese editors in Foucault’s Pendulum before they succumb to their own game, “you really think so?”

It would be all too easy for me to look at my reading this semester and think, “oh wow, I guess it’s my destiny to write about conspiracy theories since I read these books without realizing it!” But then, when I hear myself say this out loud, the other me, identical to me but from further along the timeline, grins and says, “you really think so?”

Final Reading Log of 2021

The end of 2021 marks five years since I began logging my reading. Normally I would have posted this list right at the beginning of the new year, but, as they say, I had some “life” come up and didn’t have time or inclination to post.

Going back through my reading logs always brings me up short by demonstrating how much of my life I don’t really remember. I’ve written before (in last year’s post) about using my reading log as a kind of mnemonic, and going back through this year’s reading really brought that back. Since January 01, 2021, much has changed in my life. M and I moved to a new state, I started a new graduate program, we leased an apartment sight-unseen then bought a house. Not to mention things like the January 06 invasion of the US capitol, the ongoing pandemic and its vicissitudes and mismanagement, etc. I like to wonder what some latter-day archaeologist or archivist would make of my reading habits. I have no idea if I read more eclectically than others, but looking back over my lists makes me wonder just what the hell my deal “is.”

For example. I started 2021 reading Star Bridge by James E. Gunn and Jack Williamson. I remember getting a copy of this book as a gift for (I think?) Easter one year, at about age 10. I re-read it several times, but hadn’t revisited it since middle school, or thereabouts. Though published in the mid-twentieth century, the novel now reads like something substantially older, almost archaic. The science fiction genre has changed quite a bit since Gunn and Williamson wrote, and some bits of Star Bridge would no doubt bar it from publication today (plenty of casual sexism, stereotyping of Chinese Americans, kisses (and more) without explicit positive consent, etc.). A straightforward action story with a gruff but honest protagonist against a sprawling empire, it hits all the old-school Silver Age notes of individualism, violent rejection of bureaucracy, and the assertion of a tough masculinity too fundamentally decent for one to describe simply as “too horny for his own good.”

But despite all this, Star Bridge still does something that I find very interesting: it tells a story in an original way, with the assumption of a reading public, a literary/literate audience. The genre tropes it reproduces sometimes obscure what I consider a genuinely well-written novel. An action hero novel, sure, a space Western definitely, but nonetheless a novel that assumes readers capable of reading well.

Several other books I read in 2021 make these same assumptions. Robert Sheckley’s stories tend less toward the horny action hero, but nonetheless assume the same kind of literate public that Star Bridge does. Since Denis Villeneuve’s Dune (which I loved) came out this year, I re-read the original series and found the same assumptions in Frank Herbert. These pieces have a density, a kind of seriousness and exactitude that might strike one as strange in science fiction until one remembers that these authors took their readers seriously, as literate adults. Stephen King’s The Dark Tower series operates in a similar vein. In On Writing, King describes his poverty-stricken childhood in Maine. His family didn’t have a TV until King had reached school age, so he had to learn to read pretty quick to have something to do. As much as it makes me sound like a fuddy-duddy or a (pseudo) Luddite, I cringe whenever I see little kids engrossed in their parents’ smartphones today.


My companions for 2021 also include a number of books on “kook” stuff. I’ve wanted to write about “kooks” for a while, but haven’t quite found the angle I want to pursue yet. That said, I found John Keel’s The Mothman Prophecies very interesting. I haven’t seen the movie, but the book struck me as, like the writers I mentioned above, taking its readership and its subject seriously, but without lapsing into the sort of performative emoting that seems de rigueur for contemporary writing, at least on the internet. Steve Volk’s Fringe-ology also takes a serious look at various “unexplained” phenomena like lucid dreaming. Starting from a position of scientistic skepticism, Volk eventually finds that actually taking these ideas seriously makes them much more difficult to simply cast aside. I approve of this position (as John Keel wrote, “belief is the enemy”), and of Volk’s willingness to take out-there ideas seriously. In these cases, and in the case of Avi Loeb’s Extraterrestrial, the writer takes their subject and their audience seriously, and respects the reader enough not to descend into pleading and thence to strident, self-righteous anger in an attempt to head off critique.

[Loeb presents an interesting case. A big-deal astronomer at Harvard, he claims in his book that ‘Oumuamua, a celestial object briefly observed passing through the solar system in 2017, actually represents a piece of space debris from an extraterrestrial civilization, namely, a piece of a solar sail. As a thoroughly non-astronomer, I can’t follow the math (there isn’t much – the book is aimed at a general readership) that Loeb uses to support his claim. However, I do find it fascinating and maybe even a bit of a relief that big-deal scientist types still allow flights of fancy and out-there ideas into their heads.]


I’ve also learned something important from 2021’s reading. I don’t read as well as I would like to. In 2020, I set a goal of fifty books for the year. I met the goal, but in so doing failed to realize that I had prioritized quantity over quality. What good does reading widely do me if I don’t remember what I read, or have no way of applying it in my life and thinking? (Incidentally, one does not only think with one’s brain. Thinking takes many forms, including bodily movements, the state of one’s surroundings, and so on.) In 2021 I read several books that, while I remember reading them, don’t call up anything much beyond that. Social Inquiry After Wittgenstein and Kuhn by John G. Gunnell, for example. I read it, but don’t remember it. Ethan Mills’s Three Pillars of Skepticism in Classical India likewise only reminds me of the day I spent in the copy room at my old substitute teaching job reading it.

Of course, one need not read deeply every single thing one reads. Barbara Demick’s Eat the Buddha and Nothing to Envy, for example, stand out as fascinating travelogues, but I don’t feel bad about not taking notes on them. I read them for fun (if by “fun” I mean learning about famine in North Korea and violence against Tibetans in Western China.) Merlin Sheldrake’s Entangled Life, Connie Barlow’s The Ghosts of Evolution, and Robert Macfarlane’s Underland, however, I should have read more carefully. Oh well, I guess now I have an excuse to revisit them.

That said, this year I want to read better. I’ve started by going back to paper codices, the good stuff. I don’t mind reading on a screen when I can’t help it, but deep down I want the physical object in my hands. My books all have a place on their shelves now, which I love. When I worked as a teacher, I used to say “do less, better” to focus on the hard work of actually making sure my students understood and could independently apply what we went over in class. A good foundation helps make the other stuff easier. Seems I’ve forgotten to take my own advice.

I won’t describe the specific steps I intend to take towards this goal of reading better here. That will probably make its way into a stand-alone essay.

Anyway, below you’ll find the complete list of my reading for 2021. It doesn’t include chapters of books or essays/papers I read for grad school. For one thing, that would make the list way longer. For another, I like to think that my incompleteness will, by turns, infuriate and relieve whatever hypothetical graduate fellow ends up having to process my “papers” after I die. (Godspeed, friend.)


Title | Authors | Started Reading | Finished Reading
The Ghosts of Evolution: Nonsensical Fruit, Missing Partners, and Other Ecological Anachronisms | Barlow, Connie | 2021-12-26 | 2021-12-26
Hunters of Dune | Herbert, Brian; Anderson, Kevin J. | 2021-12-24 |
Chapterhouse: Dune | Herbert, Frank | 2021-12-18 | 2021-12-22
Heretics of Dune | Herbert, Frank | 2021-12-13 | 2021-12-18
Heart of the Shin Buddhist Path: A Life of Awakening | Shigaraki, Takamaro; Matsumoto, David | 2021-12-03 | 2021-12-13
Four Lost Cities: A Secret History of the Urban Age | Newitz, Annalee | 2021-11-29 | 2021-12-08
The Cleanest Race: How North Koreans See Themselves and Why It Matters | Myers, B.R. | 2021-11-28 | 2021-11-28
Eat the Buddha: Life and Death in a Tibetan Town | Demick, Barbara | 2021-11-25 | 2021-11-28
Nothing to Envy: Ordinary Lives in North Korea | Demick, Barbara | 2021-11-24 | 2021-11-25
God Emperor of Dune | Herbert, Frank | 2021-11-21 | 2021-12-13
Children of Dune | Herbert, Frank | 2021-11-11 | 2021-11-19
Dune Messiah | Herbert, Frank | 2021-11-02 | 2021-11-05
On the Genealogy of Morals | Nietzsche, Friedrich Wilhelm | 2021-10-29 | 2021-11-21
Pious Nietzsche: Decadence and Dionysian Faith | Benson, Bruce Ellis | 2021-10-20 | 2021-11-24
Dune | Herbert, Frank | 2021-10-16 | 2021-10-31
Beneath the Wheel: A Novel | Hesse, Hermann | 2021-10-10 | 2021-10-22
Lila: An Inquiry into Morals | Pirsig, Robert M. | 2021-10-03 | 2021-10-10
A Time of Changes | Silverberg, Robert | 2021-09-25 | 2021-09-28
A, B, C: Three Short Novels | Delany, Samuel R. | 2021-09-06 | 2021-09-13
Entangled Life: How Fungi Make Our Worlds, Change Our Minds and Shape Our Futures | Sheldrake, Merlin | 2021-08-25 | 2021-09-04
Tower of Glass | Silverberg, Robert | 2021-08-02 | 2021-08-03
Sphere: A Novel | Crichton, Michael | 2021-07-27 | 2021-08-01
Song of Susannah | King, Stephen | 2021-07-20 | 2021-07-22
Wolves of the Calla | King, Stephen | 2021-07-07 | 2021-07-19
The Wind Through the Keyhole: A Dark Tower Novel | King, Stephen | 2021-07-03 | 2021-07-06
Wizard and Glass | King, Stephen | 2021-06-29 | 2021-07-02
The Waste Lands | King, Stephen | 2021-06-02 | 2021-06-07
The Invention of Nature: Alexander Von Humboldt’s New World | Wulf, Andrea | 2021-05-29 | 2021-08-11
Wittgenstein and Psychoanalysis | Heaton, John M. | 2021-05-26 | 2021-05-27
Underland: A Deep Time Journey | Macfarlane, Robert | 2021-05-19 | 2021-05-25
The Drawing of the Three | King, Stephen | 2021-05-02 | 2021-05-08
The Dark Tower I: The Gunslinger | King, Stephen | 2021-04-26 | 2021-05-02
On Being With Others: Heidegger, Wittgenstein, Derrida | Glendinning, Simon | 2021-04-22 | 2021-05-13
Three Pillars of Skepticism in Classical India: Nagarjuna, Jayarasi, and Sri Harsa | Mills, Ethan | 2021-04-20 | 2021-04-29
Victim Prime | Sheckley, Robert | 2021-04-20 | 2021-04-20
The 10th Victim | Sheckley, Robert | 2021-04-17 | 2021-04-17
On Violence | Arendt, Hannah | 2021-04-11 | 2021-04-11
Applying Wittgenstein | Read, Rupert | 2021-04-05 | 2021-04-17
The Mothman Prophecies: A True Story | Keel, John A. | 2021-03-29 | 2021-04-03
Eye | Herbert, Frank | 2021-03-20 | 2021-08-10
Social Inquiry After Wittgenstein and Kuhn: Leaving Everything as It Is | Gunnell, John G. | 2021-03-04 | 2021-04-04
Fringe-ology: How I Tried to Explain Away the Unexplainable-And Couldn’t | Volk, Steve | 2021-02-26 | 2021-03-04
The Super Natural: A New Vision of the Unexplained | Kripal, Jeffrey J.; Strieber, Whitley | 2021-02-18 | 2021-02-24
Spurs: Nietzsche’s Styles | Derrida, Jacques | 2021-02-13 | 2021-02-23
The Codex | Preston, Douglas | 2021-02-08 | 2021-02-11
Extraterrestrial: The First Sign of Intelligent Life Beyond Earth | Loeb, Avi | 2021-02-02 | 2021-02-05
Store of the Worlds: The Stories of Robert Sheckley | Sheckley, Robert | 2021-01-29 | 2021-03-04
Star Bridge | Gunn, James E.; Williamson, Jack | 2021-01-27 | 2021-02-16

Meditations on Re-Reading

Précis: Meditations on re-reading The Dark Tower series. Thoughts on the practice of re-reading, especially at regular intervals; considerations of the temporal experience of re-reading and what it can tell us about making a better world.


NOTE: SPOILERS AHEAD

My wife gave me a complete set of Stephen King’s The Dark Tower for our wedding anniversary, and I’ve decided to attempt a yearly re-reading of the series. I read the series for the first time about two years ago on the recommendation of my father-in-law and loved it. Now that we’re all quarantined and the sense of time is slipping and getting fuzzy (at least for me), I found myself gravitating back to The Dark Tower for reasons that I’ll make clear later. I’ve just finished volume two of the series, The Drawing of the Three. I have final papers to write and final grades to assign in the next week and a half or so, so I probably won’t get started on volume three, The Waste Lands, for a couple weeks.

NOTE, AGAIN: SERIES SPOILERS AHEAD

The Dark Tower series is cyclical. The first and last lines of the series, which King has described as one long novel in several volumes, are the same: “The man in black fled across the desert, and the gunslinger followed.” Roland’s quest for the tower is an eternal cycle, although one that might eventually end. He and his ka-tet, those bound by destiny, travel through worlds that have “moved on,” their trappings of technology and civilization degrading and degenerating into unusability. Through a series of trials, the ka-tet travels to the Dark Tower, the nexus that holds all of the worlds together, to face the Crimson King, who is intent on destroying the multiverse held together by great Beams that intersect at the Tower. I’ll leave it to the reader to find out for themself whether Roland is successful.

The cyclical nature of The Dark Tower saga makes it an interesting point of departure for some meditations on re-reading. I will mostly focus here on re-reading novels, but will address re-reading non-fiction (especially philosophy) toward the end of this essay. I’ll be making quick and dirty use of some ideas from the work of French philosopher Gilbert Simondon (filtered through that of another French philosopher, Bernard Stiegler). This isn’t a formal paper, so I’m dispensing with footnotes and the like. Besides, I’m really just using one concept as a starting point.

In the first reading of a novel, everything is new and surprising. The reader is pulled along through the narrative both by its novelty and by the impulse of the plot. The plot implies or poses questions: what happens next? How does this end? Even in formulaically written “genre fiction” like detective novels, techno-thrillers, or supernatural romances, part of the pleasure of a well-written novel is the way it manipulates the reader’s expectations and provides novelty. For example, Jim Butcher’s Dresden Files plays with the hard-boiled genre of noir detective fiction. Long-time readers will recognize common tropes from other exemplars of the genre, like the protagonist’s hard-boiled but ultimately moral and heroic nature and the incompetence of most of the police; the fun twist is that Harry Dresden, the protagonist, is an honest-to-goodness wizard (really – and it’s not YA fiction like Harry Potter). The novelty of the plot in this example, then, lies not in the form of the plot itself, which belongs to an established genre, but in the twists and idiosyncrasies Butcher incorporates into it.

For Simondon, and Husserl before him, perception is not a passive act. All objects of perception are understood through the mind of the perceiver, and this mind is not just a receptacle for perception. The mind actively reaches out – it “protends” toward new perceptions, based to some degree on previous perceptions it has retained. As an illustration, think of a person standing on a beach watching the waves break. A person who knows how to surf sees the waves differently than one who does not, or one who is more interested in fishing than surfing. It’s not that the surfer sees “more” than the non-surfer, just that the surfer perceives different aspects of the same object, aspects she knows how to look for and considers important.

After the initial perception, which again is not “pure” – there is no “pure” or non-judgmental perception – elements of the perception which were “protended” for, toward which the perceiver’s attention stretched, organize how those perceptions are retained in memory. The non-surfer may go home happy and feeling calm and peaceful because the waves only lapped gently at the shore that day, while the surfer may go home frustrated for the same reason.

Yes, but what does this have to do with (re)reading? As I said above, the first reading of a novel is new in the sense that, even given previous experience with the genre one reads – or with reading novels at all – this particular novel is new to the reader. We don’t know what happens yet, so we read on (or don’t). It is easy to forget that even this first reading is not “pure” in the sense of non-judgmental perception. To continue with the example of genre fiction, The Dresden Files is obviously and immediately a detective novel, albeit an idiosyncratic one, so noir junkies will “protend” expectations into the text that more casual readers of the genre might not, even if they recognize the presence of generic tropes. Even more basic, however, are the protentions inculcated in readers by the social frameworks that help us make sense of novels at all. The novel is a relatively recent form of literary creation. For Homer, for example, a novel would probably not have made much sense, even if he could have read one, because it differs dramatically from the forms of literary production common to his time and cultural background.

On a second or subsequent reading of a text, the protentions one brings to a novel might become clearer. For example, maybe you read a book and told a friend about it, who then told you that every time a character stands up to do something in the novel he is described as “stretching his legs.” You didn’t notice this, and so you re-read the book with this claim in mind. Sure enough, you find to your dismay that this character does indeed do a lot of leg stretching. This example is somewhat prosaic, but it points to two important aspects of re-reading I think are worth lingering with.

First, perception is not passive and never “pure.” It can be “primed” to look for certain things and mark them where it might not have otherwise. I’ll talk later about how this can be used for making the world better, but it’s worth stopping a moment to consider the negative version of this kind of “priming”: conspiracy theory.

Again, this is a blog post, so I won’t go deep into the psychology or subjectivity of conspiracy theorists; I will only pause to point out that conspiracy theory is protention gone awry. The conspiracy theorist sees the object of their obsession everywhere, and any piece of information can be made to fit an understanding structured by this mis-protention. “Of course they’d say that, they’re X kind of person, in the pay of the Deep State, etc.” The problem here is not that the conspiracy theorist doesn’t see things “objectively.” Again, no one ever does. Rather, the problem is that they have hyper-extended their protention so they can never be wrong. At no point can they be brought up short and required to rethink their claims or incorporate new evidence into a revised and necessarily partial (both in the sense of “incomplete” and in the sense of “interested,” as in “I’m partial to”) understanding. For all that they like to claim to be thinking, this is in fact exactly the opposite of thinking.

Second, perception can be trained and altered in line with one’s goals. An aspiring novelist, for example, might approach a novel she has enjoyed in the past with the intention to look for – that is, protend her perception into – the stylistic and formal qualities of the novel rather than simply its plot and dialogue. She isn’t seeing anything that wasn’t there on her first reading, only actively looking for data in the same text that mean something different to her in line with her new goals. This is obvious to anyone who majored in English because they liked to read: reading Frankenstein for pleasure is very different from reading it for your final paper.

It’s worth pausing here again to note something important. We protend into new perceptions constantly, whether consciously or not. We cannot “suspend judgment” completely, and have to be trained to do so even to a modest degree. If we could all magically see things “as they really are,” there would be no need for lawyers or negotiators. One of the possibilities re-reading allows is the opportunity to carefully consider and examine the protentions we bring to the object of our attention, and whether we want to continue using them. This requires us to think carefully about what we are looking for, and, even more importantly, about why.

“Why” is the most difficult question, but in a sense also the most natural. We don’t do things for no reason. Humans are capable of intention and of making choices in the world, a world which is of our own design. Death and taxes may be certain, as the saying goes, but the two are not really the same kind of thing. Death is the great unifier. Everyone dies, and has always done so, regardless of where, when, or how they lived. Taxes, on the other hand, require a whole host of other things to exist in order to make sense at all: money, the state, a sense of “civic duty” or responsibility, accounting, and so on. All of these things are produced and reproduced by humans and, because they are produced by humans, could be reproduced in other ways or ended entirely. This might seem obvious to some, but for others the idea that death and taxes have the same kind of certainty is an article of faith. Like conspiracy theories, claiming that the human-created world, as it is, is somehow “natural” inhibits thought rather than stimulating it. For an example, consider the time and energy spent by Southern writers and politicians in trying to convince people that slavery was “natural.” A practice that we now perceive with disgust was not only accepted but claimed to be natural not even two centuries ago.

This example should prick us to reflection then. What do we think is “natural” that is in fact part of the human-constructed world that could operate differently? And how could we make it that way?

Re-reading, then, is a useful way to illustrate a capacity humans have that goes far beyond just looking for hints at how to be a good novelist in a book one enjoys. By attending to our protentions and considering what we bring to a text and why, we can gain experience in performing similar acts of attentive consideration on the broader human-constructed world we live in. This is especially important in a time when media are reduced to “content” made to be “consumed.” To re-read a book, especially to re-read it with a particular goal in mind for a particular purpose, is a weird atavism now. Sure, re-read it if you like it, but what are you looking for? Why make the effort? Just enjoy it!

(Note for another time: one consideration we might attend to is why the work I’m describing here, of reading and thinking critically, is not considered “fun.” Or why “fun” things seem to be the only things many people consider worth doing.)

Re-reading is an essential practice, especially in a world dominated by the drive of consumption. Many novels, television shows, movies, video games, and other media aren’t worth revisiting, but those that are ought to be. The critical faculties developed through the practice of re-reading may be all that stands between the hope of human lives worth living and the possibility of precarity, penury, and nastiness – of lives of pure and thoughtless consumption, of lives without even a bad “why,” where our protending is simply done for us.

I may seem to be overstating the power of re-reading. It’s true, I probably am. But we are (always) living in the Kali Yuga, the time before the end of the world, and it’s worth starting somewhere.