I know when I have visited a beautiful place because I convince myself, almost without reasoning it out, that this place has made itself essential to my life.
A beautiful place exists in time. One among many magic tricks that it performs is to break out of the constructed specifics of its appearance. Even beautiful cityscapes, with a view that changes every day, become monumental, geologic, in stature. Disasters in the city derive some of their visual power from buildings that take on the qualities of mountains, crumbling.
The beautiful place is essential because it reconfigures life, points it in a different direction. This does not mean I know where I’m now headed (usually not). Still I feel that, no matter how accidental my arrival, I couldn’t have done otherwise than be here. I will return every chance I have.
But firm hindsight crumbles. It’s all too easy to turn away from the next opportunity: I’m too busy, I’ve seen that before, I know what it’s like. When I arrive again, I have the thought: beautiful places are as necessary as eating or drinking. This necessity has a different pace. Like water for a plant, it can seem indifferent to being ignored from one day to the next. But to go without is to let something die, to be newly vulnerable. Other dangers rise up, and the real cause will never be traced back, because the language and concepts for the loss have themselves been lost.
But that the present order of things was not to be taken for granted, that it presupposed a certain harmony between the world and the guardians of culture, that this harmony could always be disrupted, and that world history taken as a whole by no means furthered what was desirable, rational, and beautiful in the life of man, but at best only occasionally tolerated it as an exception—all this they did not realize.
Hermann Hesse, The Glass Bead Game
Yascha Mounk had the political philosopher Michael Walzer on his Good Fight podcast a few days ago, and they had an exchange about the rise of the so-called “post-liberal” political thinkers. The full version is too long to quote here, but a few highlights:
According to people like Patrick Deneen, liberalism is responsible for everything that has gone on in the modern world. And what is most amazing about his work is all the factors that he omits in his description of the rise of modernity, like the Protestant Reformation, which is perhaps the truest source of the individual and individualism—the individual and his God. The Protestants invented that singular pronoun. The gathered congregation, the critique of hierarchy—all that comes from the religious side, not from secular liberal ideology. And Deneen just doesn’t talk about it. One crucial aspect of individualism (which already also begins in primitive forms among the Protestant radicals) is the equality of women. Genuine equality of women, the end of the patriarchal regime, is going to change the way families live and the way familial life is organized. And they continually invoke the traditional family which has been destroyed by liberalism, and they are not prepared to say that women are not equals, they’re not prepared to say that.
…When you look in a little bit more detail it is absolutely unclear what that new society would look like. One of the things that strikes me is that a lot of the post-liberals are either Catholics, or Catholic converts, and they seem to think that this would be a majoritarian society in which the elect few, or perhaps the democratic many, impose their religious values on the rest of society in the name of the higher good. But it’s an irksome fact that virtually all of the societies in which they operate have become very secular, and Catholics, in particular, are a minority in the United States. And so it’s very, very hard to actually make heads or tails of what it is that this post-liberal society would look like. This still does not appear to be an obvious competitor ideology, and the travails of the post-liberals in making up a competitor ideology seem only to underline that point.
Even if you are not a post-liberal, narratives of decline are a major force. We live within a minefield of hypothetical declines–cultural, theological, economic, political, environmental–and they are usually related. The type who embraces one declensionist explanation is more open to others. It’s a pessimism with a cross-partisan appeal, even if disagreement over what to do about it fractures any consensus about the decline itself.
But I have found myself thinking lately that while some theories of decline might have historical merit, most post-liberals have the ethical import of the decline backwards. Perhaps the declensionists are in the grip of the most essential Enlightenment idea: that the world could be anything other than disordered, bleak, knocked off its marginal high points. Bad things happen, and keep happening. What we are dealing with is not a decline but a baseline. From this vantage, restoration from the decline looks more like a fantasy, a desire to return to a set of unattainable circumstances. This is not a fatalism–the point isn’t to step back and do nothing–but an approach to the future without a sense of revenge, and without a bitterness at having lost something that could have been. Good work (whatever that work is) can still happen, if it accepts that it will coexist with rough and fragile circumstances–just like any progress that results.
Alexander Etkind, historian of Russia and an expert on the trade in natural resources, discusses the privatization of agriculture after the 1991 breakup of the Soviet Union:
Members of the Soviet collective farms had used (but did not own) micro-slots of land, mostly vegetable gardens. After 1991, millions of peasants and dacha owners privatized their small households and gardens. In 1999, a quarter of the Russian population owned a subsidiary plot and was cultivating it. They worked 7 percent of the country’s arable land but produced more than 40 percent of its agricultural output. Amazingly, they provided 92 percent of Russia’s potato harvest, three quarters of its vegetables, almost all of its fruit, and half of its milk and meat. In 2009, the numbers were similar. This was an intensive but premodern agribusiness: whole families worked with shovels on miniscule plots, while elderly women sat on the side of the road, selling herbs by the gram or potatoes by the kilo. But these people were free: the only levy they paid was property tax; they chose their seeds, tools and methods; they owned their land and could sell it whenever they so desired. Russian agriculture had the same two-tier structure as other sectors: one part of the system, populous but mostly poor, fed the ordinary folk with perishable produce that could not be exported; another part, small but wealthy, produced the staples at volume, selling them abroad for convertible cash.
–Russia Against Modernity (2023), “Parasitic Governance”
This is amazing. Post-Soviet privatization of the economy in Russia usually meant privatization into the hands of a few, so that the resources could be sold on an international market for international cash, which went to the international bank accounts of those same few owners. As a result, the fruit, vegetable, and meat consumption of around 150 million people was treated as an afterthought by the domestic authorities. And so an entire nation’s grocery supply of fruits, vegetables, and meat was effectively provided by a bunch of subsistence and hobby farmers.
Jacques Ellul on pre-modern European attitudes toward technical progress and the improvement of practical tools:
The deficiency of the tool was to be compensated for by the skill of the worker. Professional know-how, the expert eye were what counted: man’s talents could make his crude tools yield the maximum efficiency. This was a kind of technique, but it had none of the characteristics of instrumental technique. Everything varied from man to man according to his gifts, whereas technique in the modern sense seeks to eliminate such variability. It is understandable that technique in itself played a very feeble role. Everything was done by men who employed the most rudimentary means. The search for the “finished,” for perfection in use, for ingenuity of application, took the place of a search for new tools which would have permitted men to simplify their work, but also would have involved giving up the pursuit of real skill.
Here we have two antithetical orders of inquiry. When there is an abundance of instruments that answer all needs, it is impossible for one man to have a perfect knowledge of each or the skill to use each. This knowledge would be useless in any case; the perfection of the instrument is what is required, and not the perfection of the human being. But, until the eighteenth century, all societies were primarily oriented toward improvement in the use of tools and were little concerned with the tools themselves. No clean-cut division can be made between the two orientations. Human skill, having attained a certain degree of perfection in practice, necessarily entails improvement of the tool itself. The question is one of transcending the stage of total utilization of the tool by improving it. There is, therefore, no doubt that the two phenomena do interpenetrate. But traditionally the accent was on the human being who used the tool and not on the tool he used.
–Jacques Ellul, The Technological Society, “Technique in Civilization”
It is a fairly common idea among those who study the origins of life that the regular pulsations of nature as we know it on earth–the alternation of the seasons, of day and night, the waxing and waning of the moon and the rising and lowering of the tide–may have provided the crucial impetus to abiogenesis. Even if the chemical compounds necessary for life may be found floating, say, in an interstellar cloud, the absence of cyclical alternations in that environment would likely guarantee that no more complex organic system should ever evolve. We need the pulsations, the gentle rocking, that the circadian, the lunar, and the seasonal cycles provide.
The whole neighborhood was in an uproar, setting off firecrackers. I lighted sparklers and pinwheels for the children, liked to see in their eyes the fearful wonder that I had seen as a child. Lila persuaded Melina to light the fuse of a Bengal light with her: the jet of flame sprayed with a colorful crackle. They shouted with joy and hugged each other. Rino, Stefano, Pasquale, Enzo, Antonio transported cases and boxes and cartons of explosives, proud of all those supplies they had managed to accumulate. Alfonso also helped, but he did it wearily, reacting to his brother’s pressure with gestures of annoyance. He seemed intimidated by Rino, who was truly frenzied, pushing him rudely, grabbing things away from him, treating him like a child. So finally, rather than get angry, Alfonso withdrew, mingling less and less with the others. Meanwhile the matches flared as the adults lighted cigarettes for each other and cupped hands, speaking seriously and cordially. If there should be a civil war, I thought, like the one between Romulus and Remus, between Marius and Sulla, between Caesar and Pompey, they will have these same faces, these same looks, these same poses.
–from Elena Ferrante’s My Brilliant Friend
We are living, supposedly, in a boom time for narrative, or for the recognition of the role of narrative in human affairs. For the last few years, since the “power of narrative” became a refrain picked up in mainstream U.S. culture, I have found myself asking what the alternatives might be, and how we might conceive of the imagination in a mode other than that of storytelling.1
For one, I think we can oppose a narrative form to that of a mythical or cyclical presentation. Perhaps narrative becomes prominent in times that think of themselves as particularly novel or unprecedented. Narrative, after all, is constructed by means of a progression of events, a distinction made between a beginning and an end.
A narrative can draw on something recurring, something like a myth, of which a story is just the latest instantiation.
Today we also see cyclical accounts broken down into narrative. Weather becomes climate becomes climate change. Over a long enough time, you find the beginning and end of temporality itself. The James Webb telescope attempts to look back into the past, not to better understand the regular cycles of celestial phenomena in the present, but to discern the governing narrative that created these–temporary–regularities. Narrative can be generative; even life, defined by the ability to reproduce itself, had a start. But in a time of narrative, instability rules.
There is still a lot of disagreement over how, exactly, photography was received by artists when it first emerged. Along with the general public, many artists took notice when the first daguerreotype appeared in the late 1830s. But they disagreed about what bearing, if any, the technology ought to have on art. I want to consider for a moment the version of this argument that says photography created an existential crisis for art and artists: that when photography emerged, many artists understood their work in primarily representational terms. Furthermore, these same artists saw in photography a supreme representational accomplishment, a challenge to the worth of their work that was all the more grave because it could be achieved with minimal skill by the “artist” (photographer). Then, so the argument goes, art started to move down the road toward modernism, which was essentially a set of post-representational innovations that distinguished the purpose of avant-garde art from photography.
I wonder how an analogous story might play out again with writing, knowledge work, and the recognition of chatbots. Our own moment leads me to reflect back on the situation with art and photography almost 200 years ago now, and makes me think that maybe it wasn’t so much the artists who perceived a threat to their work, as it was the public that (re)interpreted art in terms of photography. If large numbers of people see the artist’s work as essentially about representation, about reproduction of reality or things “as they really are”–then in some sense it doesn’t matter what the artist thinks he or she is doing. You can decide that large numbers of people misunderstand your work and continue on doing the same things, but you can also lose your audience in the process. Even if the reasons for the change are hard to discern, it seems that art underwent a paradigm shift from within a world that could be photographed.
In the same way that artists pursued a multitude of ends at the time photography arrived, there will never be any kind of agreement on what writing is or is “doing.” Still, regardless of what writers think they are doing, automated methods will find a way to produce a refracted copy of it–at least some of it. But automation like a chatbot has a very different way of presenting what it does to the public. For example, automated writing “responds” to a “prompt,” it “completes tasks,” or “answers questions.” In the same way that much of the art world was collapsed into the self-presentation of photography, writing risks losing some of the rich account of itself when it is presented with an apparent copy by machine.1
Chatbots push us a little further into a model where writers have goals, where they have “information they want to communicate” (where is the information if not in the writing?). I wonder if we might see something like a photographic reckoning for the writing world today, where the apparent similarity with automated methods leads to a profusion of new genres and self-justifications for, say, literary writing. And could an analogous re-evaluation occur in more utilitarian writing forms (e.g., the professional memo, advertising) as well? Might all forms of significant writing need to situate themselves on new conceptual footing, to account for the investment of human time and energy in the shadow of the machine?
Art, too, faces another version of this with chatbot-like tools: if suggestive new artwork can be generated with a simple concept typed out in a prompt, does this further threaten the representational justification for art?
The Dress was divisive, in the purest sense, dividing (according to a BuzzFeed poll with nearly four million votes) the two thirds of people who saw white and gold from the third who saw blue and black. Facebook’s engineers had been perfecting its engagement metrics…[A]nd the Dress was universal—a form of media that didn’t even require literacy to land. It didn’t spread, like most memes, along a rising viral curve, passed hand to hand. It spread, instead, algorithmically, as Facebook showed the Dress to users whose friends had not yet shared it, confidently predicting that they would find it just as engaging. Within a couple of hours, our traffic rose to seven hundred thousand people simultaneously, seven times our usual peaks. That sent our engineers scrambling to add servers to BuzzFeed’s back end; it was a number not reached before or since by a BuzzFeed post on the web.
That does seem like a moment to remember: when a medium designed to transmit streams of text transcends itself, delivering something “universal—a form of media that didn’t even require literacy to land.”
The novelist Haruki Murakami, on how he demands regular productivity from himself when working on a new piece of long fiction:
That’s not how an artist should go about his art, some may say. It sounds more like working in a factory. And I concur—that’s not how artists work. But why must a novelist be an artist? Who made that rule? No one, right? So why not write in whatever way is most natural to you? Moreover, refusing to think of oneself as an artist removes a lot of pressure. More than being artists, novelists should think of themselves as “free”—“free” meaning that we are able to do what we like, when we like, in a way we like without worrying about how the world sees us. This is far better than wearing the stiff and formal robes of the artist.1
I find something satisfying about a novelist refusing to call himself an artist. And there is a long tradition of writers de-emphasizing their artistry, likely stretching back to before the novel was a major, reputable genre of writing.
Murakami suggests that artistic production might not be a good descriptor of his activity; perhaps many “artists” no longer see themselves in the image. Perhaps what it means to be an artist has become too specific, and it is easier to discard the label. A writer like Murakami could prefer to write under simpler–if less legible–terms. A lot of art today struggles under a romantic burden; to be an artist is to resist the functional and purpose-driven framework of ordinary life. Murakami’s self-definition–that a novelist is someone who is more free, who takes his or her freedom seriously, who makes use of it–also shares this resistance to a reductive functionalism, even as his artistry accepts a regularity more typical of the factory.
It is worth thinking more about why being an artist has lost some of its attraction–especially to those who devote their lives to making art.
From “Making Time Your Ally: On Writing A Novel.” In Novelist as a Vocation, Haruki Murakami, 2023
The late spring dispersal of cottonwood seeds happened throughout last week–and into this one. I couldn’t find any ready information about variance in seed volume by year, but the amount of seeds in the air seemed greater this year.
A low-hanging catkin in a nearby park with closeups of seeds, mid-release:
Seed piles could be found everywhere, so high and thick they were like snow on the grass.
In less cultivated environments, the eastern cottonwood (Populus deltoides) is more of a niche species, occurring near rivers and water sources, where the seeds need constant moisture to germinate and grow. These seeds are notably short-lived (the US Department of Agriculture’s Woody Seed Manual reports they stop being viable in as little as two weeks). Unlike some seeds, which can remain dormant for a long time until conditions improve, the cottonwood appears to be a prolific producer of low-odds seeds that travel far. Most will waft off course and die off right away, but the hope is that a few float far enough to hit the right habitat–and take off.
The city of Chicago probably likes to plant them in cultivation because they are fast growers (some sources say one of the fastest, 6 or more feet a year), reaching a mature height in 10-15 years.
Because of a few favorable qualities, this tree with a picky survival strategy gets to live everywhere in the city.