By accident, walking at lunchtime yesterday, I went far enough south from my office to come upon this new riverwalk path, the “Southbank Riverwalk,” which follows the South Branch of the Chicago River for a few hundred yards. When I walked it, in the middle of the day on a weekday, it was almost as empty as an architectural concept drawing.
Cities spend a long time trying to push nature away, to turn the earth into abstract space fit for development. Few cities make this more obvious than Chicago, perhaps the most grid-like metropolitan area in the world. But when a city reaches a certain point in its growth, the effort reverses; it becomes pleasing to bring nature back in.
The boardwalk lets its traveller float across nature; the walker gets close to the trees, bushes, and ground without scraping up against the surfaces of plant or earth. The boardwalk is the long, straight line that maintains the geometric regularity of the built world.
Indeed, the nature walk in the city might be at its best when it uses natural elements, like trees, to frame the familiar monuments.
I was sitting at my computer this evening googling a half-formed question, something like “how much of the current U.S. and world economy is made up of goods versus services, how is a ‘good’ defined, and is there any sign that the U.S. service economy is losing ground in the post-Covid era?”–when it occurred to me (or rather, occurred to me all over again) that none of the sources I found online were very good. Now, if I brought a little prior knowledge and intentional effort to the question, if I searched for respectable public institutions–like the Fed–that put their data online, I would surely find a start on these questions. But these are things that one would have to know. The average person who wants to know the answer to something non-commercial just starts typing questions as they occur to them, and will probably give up clueless, because online search these days is remarkably bad. It’s not that all of my questions had been targeted by low-quality content-farm sites, but rather that the more mainstream sites that came up first–a LinkedIn post, a Forbes article, a Harvard Business Review blurb–were all generalist filler. And on down for several pages, with the occasional news article from a few years ago or a general Wikipedia topic (“Service economy”) thrown in. Search is not very good in large part because the sites that count as “average” are mediocre at best.
And don’t get me started on bots like ChatGPT. Yes, you ask a question like this of a bot and you get a coherent answer back, but whence this answer from the void? Maybe some services like Bing will give you a few citations attached to the answer, but guess what? Those citations are sourced from the same search services I was complaining about above. If the two services that currently define the information landscape are search (Google et al.) and chat (ChatGPT etc.), then we are choosing between a graveyard of irrelevant “content” and a polished but low-context book report.
Even as more of life is spent online, the online world gets thinner and thinner. More often, when I want to know something, I find myself confronting a situation that has nearly slipped from my memory: how would I figure this out if I weren’t online? Who would I ask, and how would I go about asking? What identifiable source would I need to read? To me, the idea that a little more space might be opening up behind the screen is an exciting thought. But I do worry that if the internet completely falls apart as an information ecosystem, there will be nothing left to backstop it. What would a revitalized world of information look like, without that now-old idea of the “world wide web”?
I’ve been reading Jennifer Habel and Chris Bachelder’s book Dayswork. Actually, dipping into it, then falling away; losing interest for a while, then coming back. The episodic approach to reading works quite well for a book, written during the Covid pandemic, in an aphoristic format. Many of its passages could be tweets. The book has the feel of something written at a makeshift desk–maybe in a closet–while the writer is supposed to be doing something else (I don’t know, exactly, what the writing process was for Dayswork). But it also reads like a product of the distracted modern condition of reading. Judging by how active even many serious writers have been on X/Twitter over the past decade, I suspect that distraction is also the predominant condition of writing today.1
The waves of “Melville revival” that brought him into the American canon have always had an obsessive devotion to the historical Melville, the quotidian, real person: adventurous, flawed, idiosyncratic. Dayswork contributes to the cult of the author. While the book does use Melville’s literary work as an anchor, it spends just as much time pecking at the minutiae of the author’s life. It also spends a lot of time reflecting on other figures connected to Melville, some of them people he knew (his wife Lizzie Shaw, daughters Elizabeth and Frances) and others later interpreters or admirers, like Elizabeth Hardwick. One of the most frequently mentioned figures, “The Biographer,” is still commenting on Melville as of early 2024. The Biographer remains unnamed until the book’s end. He is Hershel Parker, a retired English professor and Melville scholar from the University of Delaware, author of not just a Melville biography but of a Melville meta-biography. And, most relevant to Dayswork, he also maintains an active blog in which–guess who?–Melville comes up a lot.
As a character, Parker does not come off well in the book, and after it was published he responded with obvious annoyance. Dayswork is above all a book of personalities, and I have a few thoughts about its relationship to personas like Parker. Are its antagonisms really any different from those of authors in the pre-internet era, who inserted gossip about contemporaries into their books? Writers have included one another in fictionalized form, walking all the way up to libel and beyond, since before mass printing began. But there is a sense of detachment in how the authors speak about Parker, as if what they say about him is not so much directed at him–as in a debate or conversation–as whispered about him. Take this episode in Chapter 6:
On the morning of the wedding Melville took a walk on the Common.
Or, “Herman sallied out early in the forenoon for his last vagabondizing as an unmarried man,” in the words of the Biographer.
Whose blog entry for today, I see, reports a frustrating transaction with Netflix:
He ordered the BBC’s Cymbeline starring Helen Mirren, but instead received a “hyper-violent” version from 2015 featuring dirty cops and a biker gang.
“Sealed it up and sent it back.”
Which must mean, my husband pointed out, that the Biographer still has a DVD subscription to Netflix.
Not wanting to pay to access the movie through Amazon Prime, he ordered a copy on eBay, asking the seller to make sure it wasn’t the violent biker version.
For days, according to his blog, the Biographer has been yearning to listen to the Act V recognition scene in the BBC version of Cymbeline.
Earlier this year he wrote that while doing exercises in the middle of the night he’d been listening to film adaptations of Shakespeare, including some other version of Cymbeline—
“Nothing more consoling than Act 5 over and over.”
Let this be an example to anyone who posts the trivial ups-and-downs of everyday life to the internet–or a blog :). Parker is someone who has elected to put himself on display. One difference between an old-school blog like Parker’s and modern social media is that the following on a blog is harder to see. From the inside of a blog, there is always a little bit of a sense of talking to oneself. From the outside–when you comb through the archives of someone’s thoughts, especially the old ones–there is always a little bit of a voyeuristic quality, like looking at someone’s private papers or files.
But voyeurism has not gone away with modern social media, which has–if nothing else–lowered the bar for two-way participation on the internet. Still, to be online is to be hit with far more “content” than one could ever produce. This makes “lurking,” a term that once referred to passive reading of old-school internet message boards, into the default online condition. When reading Dayswork, it is hard to get past the sense that the authors are very online, lurking around their subject(s). I don’t know if they would even dispute this claim. Maybe it is because of the pandemic, which made both acquaintances and strangers feel far away for a while, that the book feels like it is gossiping about all of its subjects–even Melville. In Dayswork, as in the pandemic, being online is a condition to be endured. The short-form writing–the distracted writing–that thrives on the contemporary web is well-suited to this gossip. Even if they are writing about a master of American long prose, one of Dayswork’s accomplishments is to bring a tweet-sized version of Melville into view–a Melville that is both viable in and relevant to the distraction economy.
Contemporary writers are, after all, both encouraged and tempted to be online all the time.
A useful but somewhat unsatisfying definition of “information” is that it is anything that reduces uncertainty.
For some time I have found myself thinking about the conditions under which the internet–I’ll define it here as a worldwide information-sharing network–might wither away substantially, or even change beyond recognition.
Those thoughts have only accelerated as it appears that the internet, in its contemporary form, is becoming ever more parasitic on itself. ChatGPT, which was likely produced through large-scale bulk collection of as much of the internet as possible, is only the latest version of this trend. There is more incentive than ever to capture information on both the intake side–through super-dominant platforms that host the great majority of the new information that enters the internet each day–and on the archival and retrieval side–where ever more information is “read” by bots and metadata collectors. On the 2024 internet, web activity by bots and automated tools is almost evenly split with the traffic generated by actual humans.
Yes, this network of interconnected smaller networks known as the internet is likely to be kept around as long as possible, since it has a lot of uses (many of them lucrative) to so many. This is the infrastructure internet, the network that connects things for its own sake, because it is always potentially useful to be able to send a message to a faraway place.
By objective measures the internet is still growing at a considerable year-over-year pace. But is the amount of information on the internet still growing?
The internet took on a new life when it became a series of interconnected documents. When I write “document” here, I don’t just mean text in some specific format (although a lot of early internet documents were in fact plain text). Instead I mean document in the most abstract sense: a unit of information. Information rarely stands alone; it is always based on prior efforts to know or establish something, if only implicitly. Therefore any document owes its existence to others that came before it. The internet can be thought of as an attempt to make as many documents–as much information–as possible visible and “on the map,” to make the relationships between information units explicit–and to foster the creation of new connections that would not otherwise have been created.
The typical internet growth chart begins in 1990, near the time when Tim Berners-Lee’s research group implemented a version of hypertext for linking documents into a single network. In a 1999 reflection on the fast-maturing internet, Berners-Lee recounts the type of general relationship that he wanted to create with hypertext:
So long as I didn’t introduce some central link database everything would scale nicely. There would be no special nodes, no special links. Any node would be able to link to any other node. This would give the system the flexibility that was needed, and be key to a universal system. The abstract document space it implied could contain every single item of information accessible over networks–and all the structure and linkages between them.
Hypertext would be most powerful if it could conceivably point to absolutely anything. Every node, document–whatever it was called–would be fundamentally equivalent in some way. Each would have an address by which it could be referenced. They would all exist together in the same space–the information space.1
Berners-Lee strove for a decentralized document network: everything could be linked to everything else only because there was no priority between the units. The intention to decentralize the network points to a curious feature of this model: it is a network with infinite extent (links can always be added) and no depth (documents cannot establish priority over one another, only connection). By making it easy to establish links between documents, the modern public internet became widespread at the expense of establishing its authority. To give an example, if you want to know something obscure about the city of Cleveland, Ohio, then the internet is usually the first and primary (most common) method, and the most expedient one (because it is fast and never closed)–but only rarely does it have the final answer on any issue.2 And yes, this goes for Wikipedia, too.
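As a toy illustration–my own sketch, not anything from Berners-Lee’s actual design documents–the flat document space he describes can be reduced to a few lines of code: every document has an address, any document can link to any other, and nothing in the structure ranks one document above another.

```python
# A minimal sketch of a "flat" document space: every node has an
# address, any node may link to any other, and no node carries
# special status or authority.

class Document:
    def __init__(self, address, text):
        self.address = address   # the only identity a document needs
        self.text = text
        self.links = set()       # outgoing links, addable at any time

    def link_to(self, other):
        # Links can always be added: the space has infinite extent.
        self.links.add(other.address)

# No central link database; the "web" is just whatever documents exist.
home = Document("doc://home", "A page about Cleveland, Ohio")
trivia = Document("doc://cleveland-trivia", "Obscure local facts")
home.link_to(trivia)
trivia.link_to(home)  # links confer connection, never priority or depth
```

Notice what the sketch leaves out: there is no field for rank, trust, or authority, because the model has no place for one.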
The informational internet started to come under strain as soon as it began to replace the authorities upon which it implicitly relied. Electronic communication has replaced authoritative knowledge with knowledge that is merely “the fastest” and “the most expedient,” and this in turn has replaced information with “information about information” (metadata). What are the signals on a social media platform–the “like,” the “view,” engagement, etc.–other than a way to turn metadata into a public, gamified social signal? Information itself becomes “content,” which is really just a way of valuing the container over the thing itself.
There is no guarantee that, over the long run, a worldwide public network continues to draw any trust or interest. It is quite possible that there continues to be a network through which Bank A can send a request to transfer funds from Bank B, but that there is nothing of use on that network to the public at large. Most or all of the indicators of activity on the internet today–numbers of links, visits, or reactions–have no connection to its status or value as information. What I wonder is whether, and for how long, this continues.
It is possible that the internet settles into a quasi-stable dystopia, washed over by regular waves of distracting and entertaining sideshows–maybe this can just continue forever. But it is also possible that the whole thing falls apart over time. This would be the more hopeful outcome, but it would require a painful breaking point: paranoia grows and trust wanes so badly that it becomes clear the only sane choice is abandonment, as deflated and boring as a reality without worldwide connection might look, at first. And what comes after? Who can say, but I doubt it would be a return to paper books and snail mail. Maybe a set of more manageable, deliberately regional networks takes the place of the world wide web; maybe what is now called information becomes so rare that a new value attaches to it, and it begins to grow back.
Tim Berners-Lee, Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by Its Inventor
To the extent that information on the internet carries authority, it is usually through means outside the internet, like institutional standing or legal force.
The institutional situation is that a lot of these subjects still draw interest from undergraduate students, especially in their first year(s), before they have to pick a major. But fewer students choose to stick with the humanities: the most recent long-term report I could find counted 25 percent fewer majors from 2012 to 2020, although there may have been a slight swerve upward since then. The overall trends are extremely worrying for the survival of many humanistic disciplines across the entire American university system.
Theories about the cause of the decline are everywhere, so prominent and repetitive that most are not even interesting to summarize. Everyone working on the inside of these departments has to decide for themselves why the humanities are declining.
A few thoughts:
When the argument is about the societal importance of the humanities, there may just be a mismatch between what humanistic culture contributes to collective life (a lot, I think) and what is in the short-term advantage of any single student to study and pursue. That is, there may not be a good enough case for “risking” one’s own future to study the humanities, even if everyone–including those who don’t study the humanities–is better off when there is a critical mass of people who do.
It could also be that the humanities are as much an effect as a cause of a healthy society. That is, the humanities don’t make people or societies good; they follow when people and societies already are healthy and “good.” When people enjoy some stability, confidence in themselves, and a sense of future continuity–it is at this point that many choose to engage with the ultimate, open-ended questions in literature, philosophy, art, etc. Or, when a culture becomes troubled, these subjects are still practiced, but they move out of institutions. This could be because the institutions contribute to the underlying problem, or because institutions like the university no longer understand open-ended inquiry as worth pursuing. Both seem to be occurring in our own time.
In places where the humanities are doing well and at the center of what a college does, the setting is often religious, or otherwise not invested in the critical humanities. This means places like Hillsdale College, where “Western values” and the “Western tradition” make up a fixed curriculum attached to a confident moral and political project. And usually, it’s a project with a built-in constituency. For the foreseeable future, there will be a huge cultural gulf between the faculty at these schools and secular American humanities departments–to the point that people on either side will not recognize one another as a legitimate version of the humanities. I am not religious, and yet I wish there were more exploration of how the humanities don’t have to be the critical humanities. Humanistic study appears indefinitely stuck in a cul-de-sac of critical detachment, and many mainstream academics recognize the problem. But it seems to me that there is a different-in-kind problem here: if you’re doing critical work and you want to stop, it’s very hard to do that without abandoning academia entirely. To my knowledge, there are a few senior people with tenure who, say, write novels instead of criticism, but there is no way to even propose that within the formative stage of one’s career. I would love to hear counterexamples. Maybe the way out of the critical trap is to trade in some humanities departments for more art schools.
Finally, I worry that the humanities look too much like a closed book today, that they are still too focused on “the tradition,” antiquarianism, and old things in general. This is obviously not true of all humanistic work, including the humanistic knowledge that is most implicated in the American culture wars today. But for the humanities in their present endangered state, the real struggle is to get students to take the classes and read the books at all. In other words, it’s hard to persuade students even to be consumers of humanistic knowledge. And so it would be beyond the pale, almost unthinkable, to propose that more students produce humanistic work. But as much as we need more people who have a deep sense of history, of the strangeness of other historical moments, I worry that the humanities start at a disadvantage when they are presented mostly in terms of the past. There needs to be a more expansive vision of what it means to produce humanistic work today, such that more students can see themselves in that work–regardless of their major or what they go on to do for a living–and the humanities look more like a living, ongoing, future-oriented project.
I don’t use rideshare apps that often these days. Over the break I used the Uber app for the first time in a while. Little things had changed here and there in the UI–as they usually do with web tech–but I was surprised to see that they now offer a setting for “conversational level.” That is, you can set in advance how much your driver is supposed to talk to you. But conversation is not actually a function of the app that can be dialed up and down. It’s a thing your driver does, a service (or disservice) that, for the moment, can still only be performed by the driver. You are not actually setting anything, just registering a preference that will be communicated to the driver along with your other ride information.
I don’t know why this bothered me, or even made me think. Maybe I don’t use enough person-to-person apps. Let’s be honest, for any app in the gig economy, the entirety of the software platform is really a way of turning a person (“gig worker”) into a set of menus and toggle switches (“grab [X] food at [X] and bring it to [X] by [X]”).
The NYTimes columnist Farhad Manjoo wrote something a few years ago, about the U.S. college admissions bribery scandal, that stuck with me and seems apt here: people with enough money to be the buyers in the gig economy have become “socialized to easing every hurdle through an app.” He was talking about money (Manjoo: “who should I Venmo to fix this thing?”), but another consequence of an endless landscape of software-mediated transactions is that both parties are now obligated to relate to one another like software. As I reflect on it, I think what actually bothered me about the Uber app was just how small and incremental this “setting” is. How many more of these options will there be to tap, pulse, interrupt, and shake every imaginable extension of a person’s agency? And because the setting is basically a fake lever–there’s a real person on the other side of this software lever who still gets to choose whether to comply or not–you can program up an infinite number of them. Each one probably won’t have the effect you want, but it will have an effect, if only in aggregate.
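To make the “fake lever” idea concrete, here is a hypothetical sketch–none of these names or fields are Uber’s, and I have no knowledge of their actual code–of what such a setting amounts to: a note forwarded along with the ride request, which the human on the other end is free to ignore.

```python
# Hypothetical sketch of a "fake lever" setting: the toggle controls
# no machine behavior; it only forwards a preference to another person.
from dataclasses import dataclass

@dataclass
class RideRequest:
    pickup: str
    dropoff: str
    conversation_level: str = "no preference"  # e.g. "quiet", "chatty"

def dispatch_to_driver(request: RideRequest) -> str:
    # The app cannot enforce the preference; it can only display it
    # alongside the rest of the ride information.
    return (f"Pickup: {request.pickup}. Dropoff: {request.dropoff}. "
            f"Rider prefers: {request.conversation_level}. "
            "Whether to comply is still the driver's choice.")

print(dispatch_to_driver(RideRequest("downtown", "airport",
                                     conversation_level="quiet")))
```

The point of the sketch is what is missing: there is no code path in which the setting does anything, other than being read by a person.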
I took this photo from Interstate 77, near Fancy Gap, Virginia, looking back southeast to where I’d come from. The mountains on the horizon are Pilot Mountain to the right, with its distinctive round knob, and Hanging Rock to the left.
I love the way the camera captures focus on the mountains while allowing foreground objects like the tree and the guardrail to blur. Here, like the human eye, the camera renders sharply what it cares about; detail reveals itself according to the attention given, and other objects become a sketch. The ridge on the left, in the photo’s middle ground, offers suspense by cutting in at a diagonal, revealing the height of the observer and threatening to close out the view. The sky, given substance by the cloud ceiling, makes a counterpoint to the textures of the ground, breaking only at the horizon to let in the colors that outline the mountains.
I also love the sense of space in this image, the way perspective and distance allow objects of dissimilar size to appear to be on the same scale. It is a lightly settled landscape. A town near the lower right can be made out, contained by the trees. The mountains are large, but still bounded by the view. The landscape reveals the layout of what would otherwise be too close, too “on top of me,” to see. A sense of recognition–“I was there, I am part of that”–only triggers when the observer is separated from the scene, and the scene tucks into the borders of a wider earth.
Three pictures that I wanted to post this fall but never got around to:
I don’t know why–I knew I liked them, and wanted to see them archived. Maybe I thought I would find them again the following season. But I also know that I liked these photos because they reminded me of an act of seeing, that the artifact stood in for how I related to something with my own eyes. The photos exist to point: to a moment of observational capacity, openness, and fulfillment that is far less communicable.
I’ve been thinking again about what it means to be a naturalist; one answer I’ve arrived at is that a naturalist is someone who observes uncontrolled situations for their own sake. The qualifier uncontrolled does the work, for me, of a more traditional definition of nature: nature is not just that which is opposed to the human. I believe so strongly in this observational component that I am willing to bend quite a bit on my definition of nature. Streets are a fine place, as long as you look. The point is to look with such unrelenting commitment that your vision starts to get strange, to be OK with taking away (only, only!) the impression and going no further. To rest in what cannot be communicated.
This news in astronomy got a bit of attention in a few newspapers last week. The discovery was that a distant star system has six planets orbiting in resonance: their rates of orbit are related to one another in precise whole-number ratios.
Imagine one planet that orbits its star at twice the rate of another planet in the same system, a third that orbits four times as fast (these ratios are made up), and so on.
This arrangement is both beautiful to behold and mathematically harmonious. Current thinking suggests that these neat arrangements probably arise during the formation of the star system, while fusion gets underway and dust and gas accumulate into planets. If these initial relationships still hold, it means we are looking at a system whose planetary bodies have not been disturbed over billions of years. The perfection of the system can be seen as a mechanical time capsule, a glimpse at the original creative force that first pushes stars into motion.
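To make the ratio idea concrete, here is a minimal sketch–with invented orbital periods, like the made-up ratios above–of how one might check a resonant chain: divide each neighboring pair of periods and see whether the result lands on a small whole-number fraction.

```python
from fractions import Fraction

# Invented orbital periods in days, chained by simple ratios
# (3:2, 3:2, 4:3, 3:2, 4:3) -- these numbers are illustrative only.
periods = [9.0, 13.5, 20.25, 27.0, 40.5, 54.0]

for inner, outer in zip(periods, periods[1:]):
    # Snap the measured ratio to the nearest small fraction.
    ratio = Fraction(outer / inner).limit_denominator(10)
    print(f"{outer:.2f} / {inner:.2f} = {ratio.numerator}:{ratio.denominator}")
```

Run on real data, a chain of clean small fractions like these would be exactly the kind of undisturbed clockwork the astronomers reported.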
On a related note, I’ve been returning to Spinoza’s work recently because I’m going through this book. I thought of him when I read about this concordance of ideal motion and intellectual beauty. In it, I see a phenomenon that Spinoza would find particularly pleasing. In his Short Treatise, Spinoza writes about the two types of Natura naturata, or “those modes or creatures which immediately depend on, or have been created by God”: “motion in matter,” and “intellect in the thinking thing.” On matter:
With regard particularly to motion, it belongs more properly to a treatise on natural science than here, [to show] that it has been from all eternity, and will remain to all eternity immutable, that it is infinite in its kind…
And the intellect:
As for intellect in the thinking thing, this too is a Son, product, or immediate creature of God, also created by him from all eternity, and remaining immutable to all eternity. Its sole property is to understand everything clearly and distinctly at all times.
Spinoza was writing at a greater level of generality here than that of particular planetary bodies in motion or the constructs of an embodied human mind, but I still think that he would, at least aesthetically, be struck by the harmony between astronomical motion and the constructs of the intellect. The situation offers a natural opening to the idea that matter and the intellect are in rational coordination with one another: that motion achieves its perfected realization in contemplative understanding, and the special status of the intellect is confirmed in the material embodiment of what it knows.
For someone who does not specialize in this stuff, I am especially tired of thinking and writing about AI chatbots. But there are at least two thoughts in this area I’d like to see get more attention:
How OpenAI’s nonprofit status contributed to the breakthroughs it made. Over the last few weeks, since the shake-up on the board, the company’s unusual legal structure–a nonprofit controlling a for-profit corporation–has mostly been the subject of ridicule. This reflects how badly the current moment has been captured by a certain type of profit-motive narrative about creative breakthroughs–or at least how badly that narrative has captured those who are in a position to do most of the reporting on OpenAI. The consensus I read is that OpenAI’s nonprofit structure has been holding it back for a while, that it was an accidental property of its naive founders. I hope, with time, that the stories move past this prejudice, and some journalist or ethnographer gets enough access to study if and how the company’s unusual corporate structure contributed to what it did. Innovation–especially profitable innovation–will always be unpredictable, but shouldn’t a nonprofit environment for technical innovation be taken more seriously? Was there a relaxed field here–maybe a different relationship to work, goals, and play–that nurtured the achievements the for-profit partisans now want to take credit for?
All the ways in which ChatGPT reflects a larger civilizational readiness, a cultural priming, to accept automated text generation. If bots like this really do maintain their status as breakthroughs once the hype has settled down, one of the more curious aspects of the origin story will be how long the basic technology was out in the open without any real mainstream reaction. That was true of OpenAI’s models since at least 2020, and Google reportedly had in-house chatbots with significant capabilities before that. Why did it take so long to land, and why did it explode when it did? Is there a story here about post-pandemic mental exhaustion? Certainly there’s a story about large numbers of people wanting to do–doing more of–the things that chatbots do well: sitting for long periods in front of screens, sending chat bubbles back and forth, and writing the things (e.g., code) that chatbots are trained to produce. I wonder, without the conditions that lead large numbers of educated people to sit inside in front of computers all day, whether chatbots would seem so impressive. There’s also a backstory here about an algorithmic way of life, of which chatbots are just the latest, strangest chapter. Chatbots may be philosophical zombies that usurp human qualities in the body of a computer, but computers had to draw humans a little closer before that became possible.