
Essay: Care and the Green Thumb

WARNING: If you have no patience for elliptical style, riffs and digressions, or etymological wordplay, best skip this post.


Problematic: What does it mean to have a “green thumb?”

For Heidegger, one properly acts through the hand. (Do note the singular.) Insofar as humans (which are not all Dasein, and, at least for Dreyfus, vice versa) have hands, we properly act. The hand distinguishes the human from the non-human in acting.

Of course, an immediate objection arises: what about the great apes? Or Old and New World monkeys? What about elephants, whose trunks are at least as capable of handling finicky bits as a human’s fingers? As Derrida argues pretty convincingly in The Animal That Therefore I Am, Heidegger’s thinking privileges humans over other species, thus inadvertently continuing a tradition that places humans, if not at center stage, then at least at the top of the playbill. Any attempt to identify and designate a specific difference between the human and any given animal fails, on Derrida’s account, not least because one could always find examples of individuals that are not human doing things that, supposedly, only humans can do. Of course, DNA sequencing makes this trick even easier. I have a lot more in common with a pumpkin than one might initially suppose. (A fact which I rather like. Pumpkins, when planted as part of a Three Sisters bed, provide shade and keep the soil cool and moist for the beans and corn. I’ve always felt more comfortable with support/maintenance roles – a point I will return to below. Besides, pumpkins are kinda round and squat, much like myself.)

For the moment, I want to bracket concern with differentiating humans from animals. While I find Derrida’s contributions useful and important, it nonetheless remains obvious to me that, even if one cannot clearly and permanently distinguish humans from species that are not human (and that this lack of distinction bears ethical ramifications), differences nevertheless persist.

Rather than the hand, then, I would look to the thumb, the means by which one (a human and a Dasein, for the time being) grips, encircles, takes hold of. In German, a concept is a „Begriff“, reminiscent of “gripped.” One encircles with a concept, creates a barrier or boundary (or perhaps a membrane), a place to hold on – a grip. In Heidegger’s “A Triadic Conversation,” the character of the Scholar most clearly represents the power of the „Begriff“, of the concept as boundary.


[A brief riff, if the reader will indulge me. Humans act through the hand, but this does not apply to all humans. Even bracketing for the moment individuals with impairments or motor difficulties, at a much more basic level the hand does not represent our originary means of “handling” things in the world. How does a baby interact with the world? By putting things in her mouth. One often reads “human” to mean “adult human” (historically also “white,” “male,” and “free” or “property owner”). But how did those adults get to the point of using only their hands to interact, with the mouth relegated to food, drink, medicine, stimulants, and (sometimes) the mouths and genitals of others? The mouth takes in, and indiscriminately, until the hand mediates the encounter.]


The longest of Heidegger’s “conversations” (collected in Country Path Conversations, translated and with an excellent introduction by Bret W. Davis) takes place on, you guessed it, a country path. Three conversants, a Guide, a Scholar, and a Scientist, take up again a conversation they had left off a year earlier. As the conversation carries on, the Guide seeks to convince the Scientist that, contrary to popular belief, one can describe science as an applied technology, rather than the other way around. The Scientist, a physicist and positivist, resists these ideas, remarking that the Guide’s words make him feel “groundless” or dizzy. For the Scientist, the Guide is LSD in the water. But not so with the Scholar.

As the conversation ambles on, the Scholar tries to find ways to identify and encircle the Guide’s words. Some statement reminds him of Leibniz, or Spinoza. Unlike the Scientist, whose disciplinary specificity and (necessary!) rigidity make him an easy window to smash, the Scholar has a much more flexible immune response. He enlarges the circle of a concept, broader and broader, until it can, potentially, fill all of space. The Scholar, one could say, has a much firmer “grip.”

The range of the Scholar’s ability to “grip” novelty into his existing handhold makes him (an assumption – we don’t actually know from the text) a tougher nut to crack for the Guide (whom I think one can safely say represents Heidegger more or less in earnest). To the Scholar, anything the Guide says can be identified with an existing concept and fit into an existing schema. Resemblance oozes subtly into identity.

I have, of course, a literary analogy for this phenomenon. In William Gibson’s Pattern Recognition (probably his most interesting novel, in my opinion), the protagonist Cayce Pollard (about whom more in this post) travels from New York to London to Tokyo to Moscow, and each time finds herself playing a kind of game where, when faced with difference, she tries to fit it into an existing schema. Parts of London (which she calls the “mirror world”) are “really” like New York. Parts of Tokyo are “really” like London. Anyone who has traveled extensively, especially to big cities, will recognize this pattern of behavior, a pattern made increasingly understandable (if no more laudable) by the homogenization and leveling of global culture. For me, Shanghai “really” was just like Paris until I turned off the main thoroughfares and found myself firmly back in China again. But then I passed a Burger King, entered a Starbucks, and placed an order in English, at which point I could have found myself pretty much anywhere.


[I beg the reader’s indulgence for another riff. Starbucks, it seems to me, best represents the homogenized no-place subsuming cities large and small. I have visited Starbucks locations in several countries on three and a half continents, and each only stands out as a separate place in my mind because of its differential surrounding context. For example, I visited one in Shanghai located inside a huge multi-story mall that I found garish and too bright. It looked just like all the “nice” malls I have ever visited, but something felt a bit “off,” like how UHT milk from a box doesn’t taste like fresh milk. Another Starbucks, in Mexico, I remember because the inside of the shop was so intensely air-conditioned that the glass door to the outdoor seating area was covered in a thick layer of condensation. It gets hot on the Yucatán Peninsula.

One might respond that McDonald’s would serve as a better example of homogenization. I would not disagree. Initially I would say that McDonald’s has more of a functional or even “low class” set of associations and homogenizes “from the ground up,” but that doesn’t exactly work since, for example in China, one can buy fast food from street vendors for much cheaper. McDonald’s isn’t haute cuisine there, but neither is it the cheapest source of fast and convenient calories. Again, as with Cayce Pollard, whose usual “allergy” to haute couture brands bothers her less in Tokyo than it does in London, context matters. Nonetheless, I think that Starbucks, which I associate with people tap-tapping away on MacBooks, better represents the digital and aesthetic homogenization of culture. Maybe a homogenization from the inside out, from the aspirational and downwardly mobile middle- and consuming classes that serve as insurance against overproduction. A smoothing of culture, as Byung-Chul Han puts it in Saving Beauty. To put it a bit vaguely, a McDonald’s anywhere feels like more of a “real place” to me than a Starbucks anywhere.]

Now, I don’t mean to suggest that making comparisons or finding similarities is some kind of problem in and of itself. You need some existing schema to apprehend a new idea, at least initially. Learning the grammar of your own native language makes learning a foreign one easier (or at least less totally baffling). The problem arises when all novelty is “fittable” into one’s schema ahead of time. We don’t live in a modular world, where pieces can go together in various ways but are nonetheless standardized. This isn’t Legos. Heidegger’s Scientist needed his rigid positivism not only to actually conduct scientific research, but also to allow for the possibility of going beyond his scientism. Byung-Chul Han writes (somewhere, I don’t have the citation right now) that knowledge differs from data in that knowledge is gained against resistance. The Scientist’s rigidity creates precisely such resistance. The Scholar’s erudition, on the other hand, more amorphous and loose than the Scientist’s, runs the risk of souring and overrunning the entire world. Like a gas, there’s nothing to push back against. Every Starbucks looks like all the other Starbucks, even if the layout and specifics differ slightly. If you’ve seen one Starbucks, you can probably literally imagine them all.


Speaking of Starbucks, where they wear green aprons, I now sense the approach of the point of this excursion, like a change in the wind. To return to the green thumb.

The thumb serves to grip, to encircle, to make concepted – „zu ‚Begriffte‘ machen“. As we saw with Heidegger’s Scholar, this gripping broaches the possibility that, as Ecclesiastes would put it, “there is nothing new under the sun.” Everything strikes one simply as “like” something else. One cannot any longer imagine novelty so new that it passes through to trauma.

The green thumb, then, a subspecies of thumb as it were, “grips” and encircles. But now, we must ask: what does it encircle? How hard does it grip? Does the wrist remain loose and flexible, or taut, tight, under pressure? Do the muscles of the forearm suffice to accomplish the hand’s goal, or do you have to put your back into it and slip a disc? Does the grip involve all five fingers? Both hands? (Heidegger, to the best of my knowledge, does not ask or answer these questions. Part of his problem with typewriters has to do with one properly acting “through the hand.” Of course, as Don Ihde points out, this is a clear indication that Heidegger never learned to type with any proficiency.)

A green thumb means its holder (its haver? its bethumbéd?) can keep plants growing and alive. Many people described as having “green thumbs” can, of course, tell others in explicit terms how to care for plants, but their ability nonetheless continues to strike others as peculiar and impressive. And even they themselves cannot exhaustively describe their own capability. Why? Because “having a green thumb” does not mean “knowing all about plants and being able to express that knowledge systematically and precisely in symbolic form.” To those poor souls who always kill their succulents, the “green thumb” is magic, something almost preternatural which they despair of ever learning. But this is a mistake.

The meaning of a “green thumb” really comes down to this: a particular way in which the green thumb “grips” the world. It is not a way of knowing in the sense of exhaustively and systematically articulating symbols through recall, but rather a way of comportment, a mode or key of being.

Consider an analogy with your native language. We say that one “knows” one’s native language, but we really mean something more like one lives one’s native language. (To put it in Heidegger’s terms, “language speaks us.”) Aside from sometimes struggling to find the right word, or occasional stumbles, one does not need to remember anything to speak one’s native language. Don’t believe me? Spend six months working diligently but not too intensely on Duolingo (any totally unfamiliar language will do), then take a trip to a place where that language is the native language of most of the population. If possible, try to avoid big cities where you are more likely to encounter others who can translate for you.

What will happen? Well, Duolingo works pretty well, so you’ll get up to speed on basic terms and meeting basic needs quickly enough. But beyond that, you will find yourself thrown for a loop. You will find, in your stumbling attempts to navigate the world and interact with others, that how you communicate with others plays a significant role in forming who you are, both to others and to yourself. The most difficult (and intimidating) part of learning a new language is the plummeting feeling of having to learn how to be yourself again.

A green thumb – or an eye for photographic composition, or an ear for musical composition, or a good arm in baseball – works the same way. One doesn’t “have” a green thumb or “know” a green thumb. One is a green thumb. That is, the green thumb serves as a descriptor of a mode of being in the world, one that cannot be exhaustively expressed because it does not come after the one doing the being – it is the being.

Another analogy might help. I do not know how to surf. If I accompany a surfer to the beach and we both look out onto the ocean, she and I will see different things. Not “literally” (at least assuming we have similar levels of visual acuity, etc.), but rather in the sense that the surfer will be able to tell if it’s a good day for surfing, and I won’t. She might be able to explain some of how she knows this, but not all of it. And, unless my being already exists in some sense “adjacently” to the being of a surfer, I may not even understand the things she is able to explain. However, if I begin learning to surf, if I practice surfing, if I become a surfer, then maybe someday she and I will be able to once again walk onto the beach and both see whether the waves are good that day or not.

The green thumb works the same way. One has to learn how to be such that one has a green thumb. While this learning must incorporate explicit symbolic knowledge to some degree, the real work, the real learning, and the real change in being comes from the doing, and from the becoming.

The green thumb, as a thumb, grips, it creates and holds concepts of the world. But the green thumb differs from, for example, the Scholar’s pre-configured means of expanding his grip, precisely because plants are not symbols. The mimosa tree in my front yard is, if the conditions are within a certain range, gonna mimosa. Period. I can help it along, shelter it, take care of it, feed it and water it, but fundamentally, the plant is doing its own thing. The green thumb “grips” the plant, but it can never do so completely, simply because the plant does not allow itself to be fully symbolized. It is outside of the human in a significant sense, and even an exhaustive knowledge of horticulture does not preclude the possibility of plants dying for what appears to be no reason. For all that one’s symbolic knowledge of plants can expand and expand, it eventually founders on the brute reality that the plant is not up to you.

And here we see the most salient facet of the green thumb. Insofar as it does “grip,” conceptualize, and encircle, it does so in the knowledge that this is only ever a kind of loose grip, a conceptualization that may prove useful in some cases, but ultimately fails to fully encircle its charge. It is a grip of care, the careful grip with which one holds a child’s hand while crossing the street. This is not a grip one can learn except existentially. By doing. And in so doing, by changing not just what one knows, but who one is.


Essay: That’s exactly what they want you to think.

(Formerly posted as “Report from the Workshop: 04/29/2022,” but I decided it’s much too long for a report and should stand on its own.)

Semesters are like volcanoes: they simmer and simmer for a long time without anyone thinking much of it, and then they decide one morning to violently explode.


Yesterday I submitted an outline that resulted from a semester-long independent study on Heidegger’s thinking of ontological “death” and its ramifications for education. I started with education, but somehow ended up creating a pretty ambitious research project involving existential death, conspiracy theories, and the epistemological necessity of vulnerability.

I say I started with education because, to be honest, I’ve grown tired of thinking about education. Problems in education increasingly strike me as consequences of more general (and therefore more invisible) social, technological, and epistemic limitations. Talking about “education” on its own seems more and more like missing the forest for the trees.

Of course, as a (sometime, amateur) Marxist, I shouldn’t find this surprising. The depredations of capital flow affect all aspects of social life, although differentially in different domains. I’m finding myself gravitating more and more toward what Heidegger calls Gelassenheit, “releasement,” a term he cribs from the German mystic Meister Eckhart and reappropriates for his own use. Where Eckhart would advocate “releasement toward God,” Heidegger would advocate “releasement to the things (in the world).” This allows one to “return to oneself” and see one’s existential situation anew and (potentially) with greater clarity. It’s also the exact opposite of the way that networked digital media platforms want people to behave. Thoughtful, meditative behavior doesn’t play well on platforms that run on a fuel of “engagement.”

[Upon reflection, it strikes me that one could read Gelassenheit as a kind of “blissed out” disconnection from the world. I don’t think Heidegger intends this reading, although it’s not difficult to see how one could make this mistake. Rather, and I think this is important, “releasement” for Heidegger is releasement to the world and how it shows itself to us. That is, one’s usual and unthinking apprehension of the world is “broken” and then “reset.” I hurt my knee about a month ago, and though it’s much better now, it still feels different than it did before – I actually have to think about walking and pay attention to where I set my foot. I’m imagining releasement as occasioning something similar.]

For a while now I’ve been mulling over the idea that we humans, especially but not exclusively those of us in the Global North, have been “domesticated” by a product of our own invention. Networked digital technologies are “pharmacological” in that, on their own, they don’t have a positive or negative valence. Two aspirin help a headache. A bottle of aspirin, however, will kill you. It isn’t exactly a question of quantity, but rather of distribution and of following impulses. Every time you get mad at something you see on Twitter and “clap back,” Twitter is literally (and I mean this in the dictionary sense, not for emphasis) making ad revenue from the reflexive operation of your neural pathways and your fight-or-flight reflex: the longer you stay online, the angrier and more invested you get, and the more fucking ads you are exposed to. It’s like “working” in a factory where, every time the doctor made your knee twitch with that weird hammer, the hospital administrator got money from the hammer manufacturer. (Maybe that is how doctors work, I don’t know.) But that’s an essay for another time. Right now I want to talk about serendipity.


As I was typing up my outline to turn in I realized that several of the books I’ve read “for fun” this semester have borne direct relevance to the social epistemological questions I’m beginning to pose. This happens to me pretty regularly, actually, and it’s probably just a case of apophenia, seeing patterns where there aren’t any. Of course, if the universe is one unified thing and any individual and their sensory apparatus is a distinguishable part of it that, nonetheless, follows similar rules as elements of the universe at much larger and much smaller scales, then who is anyone to say that there aren’t patterns? Maybe we just need a different point of view.

{The sentence starting with “of course” in the above paragraph is dangerous. The astute reader will understand why. If you don’t understand why, just recognize that I was, and again literally, fucking around up there.}

Let’s talk about books. First, I started reading a collection of Philip K. Dick’s short stories in January. I keep the volume by my bed for nighttime reading, so I haven’t made a ton of progress through it. But even Dick’s weaker offerings bear the distinctly clammy and metallic odor of paranoia. His VALIS trilogy, written after a kind of mystical experience he underwent and then tried to work through in his Exegesis, features a millennia-long conspiracy in which the Roman empire never died and continues to enslave humanity. Wild. In Dick’s fiction, nothing is as it seems, and there is often no way out. (Incidentally, I appreciated the most recent Matrix movie for driving this point home. I’m a congenital contrarian, so I love that film because everyone else seems to hate it, but I also love it because Lana Wachowski strikes me as dedicated to not infantilizing her audience with a clearly spelled out “message.” Just like the previous installments in the series, the “moral” of the story is: “take a minute to think, you philistines!”)

I also began Robert Shea and Robert Anton Wilson’s Illuminatus! trilogy, a send-up of acid-trip political paranoia from the 60s and 70s. The narrative structure is experi-mental (see what I did there?), with point-of-view changes galore, and makes reference to a wide variety of very specific conspiratorial schemas. The intention is clearly to satirize paranoia, but the novel does so in a way that leaves the reader unsure of just what the “real story” might be. My opinion, for what it’s worth, is that this uncertainty regarding the “real story” is good. Since Descartes, philosophers have looked for “absolute knowledge,” knowledge we could know without a shadow of a doubt that we knew. Personally, having read the bit of Descartes’ Meditations where he gets to his famous cogito, I think he may have been trolling. In any case, the spectre of “absolute knowledge” looms large and nastily. For a Biblical literalist, any challenge to a truth claim made by the Bible potentially throws the whole thing into question. Hence the literalist’s jumping through ever-more-spurious hoops to save the phenomenon. But here’s the problem: this kind of face- and phenomenon-saving behavior is now characteristic of everyone. Why can’t things be “true enough?” Or, saints preserve us, fucking metaphors?

Umberto Eco’s Foucault’s Pendulum, which I just finished the other day, actually makes that last point explicit. It’s the story of three editors at a publishing house who basically use a computer program (named after a medieval Kabbalist) to invent a global Knights Templar-themed conspiracy after encountering a strange Colonel with what he claims is a decoded Templar message. At first it’s a joke, designed to poke fun at the spurious dot-connecting done by the “Diabolicals,” enthusiasts of the esoteric who constantly submit manuscripts “revealing” hermetic and conspiratorial secrets. The editors are skeptics, with a hard-headedness Eco describes as apparently congenital to the Piedmont region they come from. Over time, however, all that starts to change. As the Plan becomes more and more real to them, and the stakes start getting higher, the narrator Casaubon reflects that he and the others have, precisely, lost the ability to think metaphorically or figuratively. The novel is deeply tragic, even though it is, like Illuminatus!, intended as satire.

I’ve often thought that fiction is a better vehicle for getting some ideas across than non-fiction (especially in philosophy). Genre fiction like sci-fi or thrillers seems especially useful to me, partially because it isn’t (or historically hasn’t been) taken seriously. Crichton’s Jurassic Park, for example, makes what seems like a pretty persuasive argument for at least some amount of caution in biological engineering, but when Jeff Goldblum’s lines get turned into memes, the thrust of the argument gets obfuscated.

Foucault’s Pendulum has been described as “the thinking [person’s] Da Vinci Code,” and I think that’s right. The point of the novel is to show that the logic of conspiracism leads to an abyss. When everything can in principle be connected but there is no nuance, no sense of when and which kinds of connections are appropriate, one falls into the trap of having no choice but to try and become omniscient. This is impossible (for a human being, anyway), and so omniscience comes to mean imprisonment in a miasma of one’s own epistemological overindulgences. It doesn’t even make sense to call it a “web” of connections anymore because a web has a particular valence – it isn’t arbitrary. While Eco could have probably made this point quite clearly in an essay (or, haha, a blog post), the novel’s form, that of an upper-level airport thriller, gets the reader in the guts in a way that making claims and articulating arguments does not.


“Interesting,” the reader has by now mumbled to themself a few times. “So you just happened to read several books that all had to do with paranoia and conspiracism, and then decided to do more research on this phenomenon? Seems pretty straightforward to me.”

I agree, actually. I’m not trying to argue otherwise. Rather, I’m trying to demonstrate that there doesn’t need to be a straight line from point A to point B in all cases, and even where such a line does in fact exist, one might not be able to perceive it until after the fact because, wait for it, the line itself might not exist until after the fact. (Hegel, whom I haven’t read, calls this Nachträglichkeit, “retroactivity.”) That is, there’s a difference between conspiracy and serendipity, but sometimes this difference is hard to perceive. Either way, one should wonder, “does there need to be a reason?”

The final book I want to talk about, William Gibson’s Pattern Recognition, deals with a kind of serendipity of perception and offers a potential corrective for the pathological drive to omniscience. Probably best known for his earlier Neuromancer, Gibson basically invented the cyberpunk genre. Pattern Recognition, however, doesn’t exactly fit that mold. There are computers, of course. The plot actually comes to revolve around a series of film fragments of unknown provenance unearthed on the (2002) internet, but the digital technologies and the world of the setting are all “real” and recognizable. The novel also has to do with, as the title suggests, pattern recognition, and seeing patterns where there aren’t any. But over the course of the novel the reader watches protagonists who don’t gain victory over the world of networked technologies and final, full understanding, but rather find a kind of catharsis in not knowing for sure.

The protagonist, Cayce Pollard (pronounced “case,” though she was named after the American mystic Edgar Cayce (pronounced “casey”)), works as a freelance “cool-hunter,” roaming urban streets on the lookout for the Next Big Thing in fashion. She has a strange and somewhat uncomfortable ability to “sense” whether a logo will “work” on the market or not, as well as a complete intolerance for brand names and logos, which she describes as a kind of “allergy.” Gibson makes a fair bit of hay over, for example, Cayce’s clothing – tags carefully cut out, the pseudo-brand of a “Casio clone” watch sanded off. (Many of these descriptions read like museum copy twenty years on, which I think adds to the novel’s interest.) Cayce doesn’t know how she does what she does, only that it works. When she is hired by Hubertus Bigend, a Belgian businessman in the mold of a proto-Elon Musk, Cayce finds herself connecting her business of evaluating logos with her passion for finding whoever is making the mysterious online footage. Think Indiana Jones, but it’s a black-clad woman from New York who does Pilates in the early 2000s. (Just to be clear, this description is intended as a positive appraisal of the book.)

While parts of the novel now feel dated (no smartphones, people communicate by calling and emailing rather than DM’ing, etc.), it nonetheless remains eerily resonant. The reader learns, about halfway through the novel, that Cayce’s father, Wingrove Pollard, worked as a security contractor for American intelligence services securing embassies. Win disappeared on the morning of 9/11/2001, but there is no proof positive of whether he is dead or not. The novel takes place soon after 9/11, and the trouble with Win’s undeath has to do with his estate – Cayce and her estranged mother, who lives in a kind of hippy commune dedicated to scanning rolls of tape for so-called “electronic voice phenomena” (EVP), cannot claim Win’s inheritance until he can be proven dead. But the estate isn’t really what concerns Cayce. The really concerning thing is not knowing.

There’s a lot of not knowing in this novel, and I would argue that the catharsis Cayce eventually reaches (which I won’t spoil) serves as a useful model for how we ought to live now. 9/11 has faded into the background of the American psyche over the last twenty-plus years (although not from American politics, unfortunately), but we still find ourselves living in a world beset by bad things happening for reasons opaque to us. The rush to claim that covid-19 was a Chinese-developed viral weapon, for example, tries to find an “explanation” for something that, insofar as it posed a threat to global health, at least initially simply had to be dealt with. I think it likely that scientists will, within my lifetime, come to know for certain where and how covid originated, but I don’t think we know now. That doesn’t stop speculation, though, driven by the pain of not knowing, of feeling the rope slip through our fingers as we hang over the abyss, unsure whether anyone will come and save us.


Pattern Recognition presents the reader with two questions that eventually merge into one for the protagonist: “who makes the mysterious videos?” And, “what happened to my father?” One of these questions is, eventually, answered. The other, however, is not. Or not completely. Not beyond a shadow of a doubt. But even with this possibility of doubt, Cayce finds a way to live. To “pollard,” in horticultural terminology, means to cut a tree back hard, leaving a stock from which new, straight branches will sprout. It’s a means of sustainable forestry because a few pollarded trees can produce lots of wood for quite a long time, rendering cutting down other mature trees unnecessary. One could read Cayce’s last name as reflective of the myriad possible coulds she encounters. There isn’t a main trunk to speak of – the postmodern “proliferation” has replaced the late-modern “grand narrative.” Coming from the position of Descartes, or later of Kant’s sapere aude!, “dare to know!,” the only choice in a world of massive complexity and scale seems to many of us to be to try, like the editors in Foucault’s Pendulum, to make sense of it all. The desire to become omniscient, to become God, to become identical to the universe itself, is a desire not for immortality and certainty, but for un-death and the constantly grinding need to continue suspecting. Either it all makes sense, or none of it does, says the inheritor of an Enlightenment grown malignant, and the abyss calls louder, louder.

What saves us from the abyss? Well, at least from my perspective, certainly not God. Neither will History, Justice, The Next Generation. The arc of history only bends toward justice if it is made to bend. The universe on its own seems supremely unconcerned with the whole thing, like a dandelion blowing in the breeze. We’re on our own and, like Cayce Pollard, unsure of what’s what. But also like Cayce Pollard, we’re not each of us all alone. Pollards produce myriad new growths from a single stump. We can still help each other, even if no one person finally “knows the score.” And we can also keep each other honest. Not necessarily by arguing, but simply by wryly asking, like the skeptical Piedmontese editors in Foucault’s Pendulum before they succumb to their own game, “you really think so?”

It would be all too easy for me to look at my reading this semester and think, “oh wow, I guess it’s my destiny to write about conspiracy theories since I read these books without realizing it!” But then, when I hear myself say this out loud, the other me, identical to me but from further along the timeline, grins and says, “you really think so?”

Essay: Don’t Write Down Your Nose At Others (A Screed)

[A note to the reader: “screed” seems an accurate descriptor for this essay after my having written it. But a screed is not necessarily incorrect, just impolite. Since this is a personal blog, I make no apologies. Nor do I give specific examples.]

[Another note to the reader: I wrote this essay several weeks ago and have sat on it for a while because I don’t quite know how I feel about it after getting it all out. I still think I make good points here, but the essay is a bit repetitive. I’m posting it anyway because I haven’t posted in a while. Maybe I’ll come back to it later. -jk ]

Writing as a philosopher, “theorist,” “thinker,” etc. does not give one license to write like a jackass. I find myself increasingly irritated and impatient with “thinkers” who write from on top of the mountain of “theory,” where all the smart (read: “good,” “informed,” etc.) people live. These writers adopt the tonal equivalent of people from New York City or San Francisco who assume that others know all about the geography and administrative subdivisions of their city. No, I have no idea where “Queens” is, nor do I know what, if anything, being from there means. “The Bay Area” is another one. Which Bay? The Chesapeake?

Don’t worry: I’m not pulling a JD Vance and trying to pivot from college-educated cosmopolitan to straight-talkin’ yokel, although Vance’s cynicism in his own recent politically-motivated pivot is so astounding as to almost be impressive. I don’t have a problem with dense, abstruse, technical language. (Someone claiming to be “telling it like it is” can be guaranteed a skeptical eyebrow-raise from me. Thanks, Derrida.) In fact, I don’t even really have a problem with the claim that some ideas are so complex or counterintuitive or whatever that the text explicating them needs to be difficult. While I would argue that many conceptual difficulties can be more or less cleared up by trying to explain one’s ideas to a bright middle schooler, in principle I don’t have a problem with some texts simply being difficult. Anyone who has read and enjoyed one of Stephen King’s novels featuring Maine accents so thick you can hardly understand them has encountered a phenomenon analogous to some “difficult” theory. Readers of pulpy sci-fi or multiple-plot-line “high fantasy” are in a similar boat.

So, what’s my problem, then? My problem is “theorists” or, even better, “thinkers” (ughhh) that don’t write difficult prose, but rather knowing prose, prose that will be read and appreciated (only) by those whose noses are attuned to the subtle aroma of rare discursive ambergris. And not only will this prose be read and appreciated, but part of the frisson of its appreciation is the disavowed knowledge that other people aren’t getting it because they aren’t as well-read as me and that I am, therefore, in some vague sense superior to them.

“Difficulty” is not the issue, nor is technical language or expecting a reader to do their share of interpretive work. The issue is the sly wink, the little nod of recognition that the reader and writer are, already, in the same club. Even more fundamentally, the members of that club refuse any attempt at opening membership to others not already a part of it. It’s “not their job to educate you.” (Yes, in fact, it is.) These writers make little attempt to explain their positions and give context to help bring their readers more fully into their discursive complex. They seem neither to be struggling to present the material nor to have struggled to think about it. When it comes to those not “in the know” – even before reading the book! – they simply shake their heads or shrug. Hélas, they say. What’re you gonna do?

In sombunal cases, “knowing” writing bears a resemblance to a bad habit I often see among highly-educated liberals: using “ignorant” as a slur rather than as a neutral descriptor. For these well-intentioned people, others who are not like them (i.e. anti-racist, anti-sexist, “woke,” cosmopolitan, demanding adherence to politeness and “sensitivity”) are not like them, ostensibly, because they are ignorant. They don’t know enough. If they only went to grad school or read a damn book, they’d see the truth, just like the “right-thinking” liberals! While I share many of the positions these liberals espouse, at least the social ones if not their milquetoast economic stuff, I part ways with them over their refusal to admit the creeping condemnation that rides along, like an invasive species, with their noting that others don’t know fact X.

For the “knowing” writer, there are certain home truths (even when that writer is denying the existence of capital-T “Truth”). These truths are not up for question because in most cases they are not even made explicit. And, more importantly, one should demonstrate a certain affect about these unacknowledged truths. Those in the know are the “good” people, predestined by God in a latter-day literary Calvinism to paradise, while those unfortunate enough not to count themselves among the elect have no hope of escaping Hell. The reader not in the know, for the “knowing” writer, is a benighted rube and will, hélas, just have to stay that way, I guess. What’re you gonna do?

In many cases “knowing prose” isn’t marked by anything direct or explicit in the text. Rather, the “knowing” haunts it. There’s something in the tone, or the little parenthetical jabs, or the diction. To put it simply, “knowing” prose gives off a “vibe.” Talking about things in terms of “vibes” strikes me as a phenomenon worth considering. Complain all you want that this is an imprecise Zoomer re-appropriation of hippie slang; it still seems quite useful to me. “I just get a bad vibe.” You feel it in some peripheral part of your perception, like the little nudge you get to grab an umbrella before you leave for work, just in case. I wouldn’t argue that one can live on vibes alone – you need an argument, too – but vibes nonetheless serve as a useful starting point. And attention to the “vibe” of a text is precisely what leads me to frustration with such “knowing” writers. They have no sense of the nasty “vibe” they give off.

My internal Freudian speaks: “yes, but could your frustration not really be a projection of your own habits and tendencies onto a text?” Of course it could, and it probably is to some degree. I live in the same world as these “thinkers,” or at least in an adjacent zip code. I am definitely guilty of looking down my nose at others, and of doing so because they aren’t in the know. And yet. Two further points come to mind:

  • Does projecting onto a text necessarily disconfirm the observations in that projection? That is, does the possibility of my irritation stemming from projection mean, by itself, that I am therefore wrong in my observations? Could it not be that my observations are both born of projection and accurate, at least in some cases?
  • Does the fact that I have no doubt both looked down my nose at others and projected my own bad habits onto a text mean that I must do these things, or that I will always do these things? One would think that people might grow and change – otherwise no one raised in a racist society could become anti-racist. Despite the hemorrhaging of church membership and attendance, the Anglosphere sure seems to still pump out a fair number of Calvinists.

The “knowing” writer commits what is for me a cardinal sin in exposition: discounting entire groups of readers from the get-go as a way of further defining their own sense of worth and sufficiency, and of doing so at the expense of everyone not in the club.

I want to make something perfectly clear: I do not intend to argue that malignant, willful ignorance does not exist, or that non-college-educated people have some kind of “authenticity” which the college-educated have lost. I likewise do not want to argue that ignorance of particular facts makes one see more clearly. Learning about biology or ecology, for example, will (often) change one’s mind about how things are, hopefully in good ways. Rather, I want to point out that what I’ve called “knowing” prose does both the writer and the reader a disservice by alienating them both even further than they already were from others they assume not to be “in the know,” and does so without any basis in facts on the ground. They aren’t alienated from readers who will react antagonistically to their writing, or people who have no interest in it, but intelligent, sympathetic readers who are simply not (yet) playing the same “language game” as that of the “knowing” writer. Writers should write for a specific audience. But to structure that audience on the basis of a prelapsarian predestination to benightedness and the hellfire of “ignorance” hurts the writer in the end, and not least because it shrinks their potential market share.

Consider: let’s say you know something. Something important and useful. You want to write about it. Writers, as far as I know, want to write to explain their ideas to others, to engage with others and convince them of something or show them something they hadn’t seen before. The “knowing” writer does all this, at least to some degree. However, the “knowing” writer is not, deep down, actually upset that others don’t know or care about what they know and care about. If everyone read their book, gave the ideas some thought, and then adopted them, the “knowing” writer would no longer be special! To actually communicate their ideas and write to others effectively, the “knowing” writer must give up that extra little spurt of dopamine they get every time a benighted rube gives them a blank stare or asks a too-basic question. And in this day and age, who will willingly give up free dopamine?

That the people who write in this “knowing” way often identify or are identified as “radical” thinkers is especially egregious. I won’t deny that I have probably read more books than many people who didn’t go to college, but that just means that I have more work to do in writing for others. (Note even here one of the assumptions that inform “knowing” writing. For all I know, my neighbor who worked as a house painter for decades has read substantially more, and better, than I have.) What good is my erudition and knowledge if I don’t use it to the benefit of others who lack these things, or who have similar levels of erudition but outside of my field or area of interest? Why would I want to have a sense of self so deeply dependent upon there being others than whom I am “better” in some vague sense? Isn’t such superiority the logic of the “white man’s burden?” Of course, the “burden” borne by the white man is a scam – there was/is no intention of making good on any promises to improve the lives of those counted by the white man among the burdensome. Even leaving aside for the moment the question of what the white men think counts as “improvement,” the fact remains that being the one to heroically bear the burden feels much better than working toward solving the problem that led to one’s shouldering the burden in the first place. Having one’s cake and eating it, too. Martyrdom without all the nasty dying bits.

Last week [a month ago] I started (and stopped) reading Günther Anders’ Philosophy of Technology by Babette Babich. I was excited to hear about the book and actually requested that the UNM library buy a copy when it became available. Günther Anders is one of the overlooked thinkers of technology in the 20th century, whose works, as far as I know, are still not translated into English. Since I don’t read German yet, I was excited to see a philosophical biography and contextualization of Anders’s work that, I hoped, would make his work easier to read once my German is up to snuff. While I’m sure reading the book would help me approach Anders’s works with fewer unnecessary hurdles, I don’t think I’ll finish it. Not because of problems I have with Babich’s project in general, but because of her writing. She writes as someone “in the know,” someone willing to take on the hard work of thinking about the things that “really matter” and that are vital to our time “now more than ever,” and to do so from a position of barely-concealed scorn for anyone not likewise bearing this romantic burden.

Her introduction starts with a meditation on Anders’s habit of working from home (he never held an academic appointment), comparing it to the social changes due to the COVID-19 pandemic with deep-sounding musings on “home-work,” etc. While there’s a version of this idea that makes an interesting point, her way of expressing this meditation positions her as someone “in the know,” someone who understands the “real stakes” of social distancing, wearing a face mask in public, and working from home. As though the difficulties, frustration, and confusion of the pandemic were not by now felt bone-deep by everyone. She writes like an unselfconscious parody of a university professor, with diction that would read as a bit stilted and too-flowery if it weren’t so ridiculous. Even more than Heidegger, who I would argue is the locus classicus for “knowing” writing, Babich is clearly “in the know,” and wants to make sure you know it too, maybe even more than she wants you to understand Günther Anders’s work. Even Nietzsche had some tact and decency. For all his claims that the readers had not yet come who would be able to read his books, he at least clearly suffered from his writing.

I won’t give examples of her “knowing” writing here because I don’t want to read any more of her book (and, on a personal blog, I don’t have to!). I don’t mean to pick on Babich in particular; her book just had the misfortune of serving as a nucleation point for subterranean grumblings I’ve registered pretty much since starting grad school several years ago. She is definitely not the only “knowing” writer I have encountered.

To conclude my screed, one more differentiation. I do not think the “general reader” exists. And, if they do, they are probably not particularly quick on the uptake. One cannot and should not write for “everyone.” This, in fact, does “everyone” a disservice. Anyone making a good argument will have a specific audience, including detractors and antagonists. If you don’t seem to have any enemies, double-check your argument, because you didn’t make it well enough. In contrast to the “knowing” writer, the honest writer is aware of their antagonists and takes them seriously if and only if those antagonists return the favor. Those unwilling to take your ideas seriously, even if only to argue against them, don’t deserve your time. But to then take the “knowing” stance and look down on them makes you even less worthy of being taken seriously. The “knowing” stance demonstrates nothing more clearly than one’s own weakness. Iron sharpens iron. To paraphrase Nietzsche, there’s nothing like a good enemy – but to the “knowing” writer, an enemy is about as desirable as a hole in the head.

Report from the Workshop: 11/14/21

Currently reading:

  • All Things Shining, by Hubert Dreyfus and Sean Dorrance Kelly
  • Children of Dune, by Frank Herbert

Currently writing:

  • Mostly sketches for medium- and long-term projects
  • Gathering pieces to submit to the UNM literary journal
  • To-do list entries that say “work on papers”

Currently thinking about:

  • The anthropology and psychology of shame (for my Nietzsche course)
  • Writing and what it discloses as a practice for Heidegger (for my Heidegger course)
  • Why “being a writer” irritates me
  • A sculptural practice consisting of pieces made entirely of 1) office waste paper and other paper product leavings, 2) office supplies you can get at a drugstore or hardware store, and 3) packaging detritus

This week was long. After last weekend’s staycation, I had to come home early on Friday to lie in bed all afternoon. I was exhausted.

Despite my exhaustion, I’ve been reading Dreyfus and Kelly’s All Things Shining, a kind of pop-philosophy “self-help” (?) book that’s been on my list for a while. I’m planning one or more essays on this book, so for now I’ll give my initial thoughts. But first, some context.

For a while now I’ve been wondering whether the hallowed halls of academe are actually a good place to do philosophy if, by “philosophy,” we mean something like “the love of wisdom.” Contemporary philosophy is often thought of as a purely speculative discipline, but this would seem to betray its earliest exponents – every one of the classical schools of Greek and Roman philosophy promised that its version of the “love of wisdom” did something. One gained happiness, or freedom from perturbation, or an accurate understanding of nature and how to live in it by practicing a particular school’s philosophy.

This isn’t to say that contemporary academic philosophy doesn’t “work,” or that it doesn’t seep out from the poorly sealed foundation of the Ivory Tower into the culture at large. It does, but often badly. A good example of this warped seepage is the idea of the performativity of gender pioneered by Judith Butler in Gender Trouble. The popular conception of this idea runs like this: gender is conventionally defined. As such, one isn’t a man, woman, etc. Rather, one performs these roles. That is, I “play the role” of a man. This understanding suggests that, since I can be aware of my masculinity as a role I play, like on a film set, I can choose not to play the role I’ve been assigned, or play it in ways diverging from the “straightforward” portrayal others expect.

Sounds good, right? The only problem is that this popular conception of the performativity of gender misunderstands Butler’s point. She argues that our performance of gender is, at least in part, not up to us. We are assigned roles, often by violent means. But, since our social practices are “iterable” (a concept she gets from Derrida), they can be copied and recopied – or not. Since social roles are iterable, we can choose not to reproduce them, or to reproduce them differently. Gender, then, becomes something like a “social” performance, in which one is assigned a role, but because these roles don’t go “all the way down,” one can find ways to either reject the performance of that role or reinterpret it. The big point missed is that the performance is not (entirely) something an individual chooses, but is rather partially a product of social forces.

I give this example because I have a feeling that All Things Shining will (have) be(en) misread in a similar way. Dreyfus and Kelly are basically recapitulating Martin Heidegger’s argument for the possibility of postmodern living beyond what he calls late-modern “Enframing” (Gestell). Enframing is a kind of ontotheology (a way of understanding the world both “from the ground up” and “from the top down”) that assigns to everything, including humans ourselves, the status of Bestand, or “standing reserve.” Things are, within this ontotheological framework, fundamentally resources to be exploited. There is no longer any qualitative difference between one thing and another. The question “should we do this?” is now trumped by “can we do this?” And, if the answer to the latter is yes, then there is no a priori reason not to do it, whatever it is. Dreyfus and Kelly don’t mention Heidegger specifically more than a handful of times, but their argument is essentially a kind of popularization of his thinking. They do this by tracing the origins of the nihilism that makes enframing possible through several canonical texts of the Western literary tradition. I haven’t gotten to the end yet, but I have a feeling that their gesture toward a renewed meaning of the world against nihilism will have to do with the late Heidegger’s “fourfold” (Geviert).

I don’t want to jump the gun here as I’m still reading the book, but at least one value I can see in this book is a pretty clear object lesson in, and articulation of, what Heidegger thinks “art” (which includes literature) means and does. They also offer some interesting readings of canonical texts in the Western tradition from the Odyssey to David Foster Wallace. Even if the reader doesn’t buy the whole argument, these readings are worth a perusal.

I know Dreyfus was an expert on Heidegger, but I don’t know anything about Kelly except that he’s at Harvard. The general idea they’re presenting is familiar to me since I’m familiar with the later Heidegger, so I’m trying to read as a “general reader” and evaluate the book on those terms. We’ll see if that’s possible. No timeline, of course, on when I’ll get those essays up. Being one’s own editor isn’t always bad.


Other than reading, I’ve mostly been worrying about final papers, the holidays, etc. You’d think that after being in grad school for nearly a decade I’d have figured it out by now and could preempt end-of-semester nerves, but you would, of course, be wrong. Nothing better than perseverating, I always say. Except maybe procrastinating by cleaning the house.

This semester has presented a strange series of challenges. On top of big-deal Life Events like buying a house (in a new city), looking for and finding a job and graduate funding, and failing to recognize people from last semester’s Zoom classes when I see them in person, it’s been difficult for me to be around other people in public. I’ll confess that I rather liked not having to go anywhere last semester, even if the reason why was bad. UNM instituted a mandatory vaccination policy for all students, so I’m not really worried about Covid (on campus, anyway), but just being around groups of people is increasingly difficult for me. It’s always been exhausting for me to spend time with large groups, but I feel much more sensitive to the exhaustion these days. Maybe I just need to be patient and my old tolerance will return, such as it was.

Another significant change I’ve made is that I’ve stopped setting myself a reading goal each year. I’ve been keeping track of the books I read since March of 2017, but last year I set myself a goal of 50 books for 2020. My final total was 68, but it occurred to me earlier this semester that I was treating this pursuit like a game, just trying to beat a high score. I doubt I was paying close attention to many of the books I read last year, even if I did in fact read them, and that’s not what I’m after. I’ll probably write more extensively about the experience of keeping track of my reading later, maybe in the new year. Of course, The Editor hasn’t set me a deadline, so we’ll see.

Thus concludes this inaugural Report from the Workshop.