Innovation, Nonprofits and Cultural Priming
Given that I am not someone who specializes in this stuff, I am especially tired of thinking and writing about AI chatbots. But there are at least two thoughts in this area I’d like to see get more attention:
How OpenAI’s nonprofit status contributed to the breakthroughs it made. Over the last few weeks, since the shake-up on the board, the company’s unusual legal structure (a nonprofit controlling a for-profit corporation) has mostly been the subject of ridicule. This reflects how thoroughly the current moment has been captured by a certain profit-motive narrative about creative breakthroughs, at least among those in a position to do most of the reporting on OpenAI. The consensus I read is that OpenAI’s nonprofit structure has been holding it back for a while, that it was an accidental property of its naive founders. I hope, with time, that the stories move past this prejudice, and that some journalist or ethnographer gets enough access to study whether and how the company’s unusual corporate structure contributed to what it did. Innovation, especially profitable innovation, will always be unpredictable, but shouldn’t a nonprofit environment for technical innovation be taken more seriously? Was there a relaxed field here, maybe a different relationship to work, goals, and play, that nurtured the achievements the for-profit partisans now want to take credit for?
All the ways in which ChatGPT reflects a larger civilizational readiness, a cultural priming, to accept automated text generation. If bots like this really do maintain their status as breakthroughs once the hype has settled down, one of the more curious aspects of their origin story will be how long the basic technology was out in the open without any real mainstream reaction. OpenAI had versions of it available since at least 2020, and Google reportedly had in-house chatbots with significant capabilities before that. Why did it take so long to land, and why did it explode when it did? Is there a story here about post-pandemic mental exhaustion? Certainly there’s a story about large numbers of people wanting to do, and doing more of, the things that chatbots do well: sitting for long stretches in front of screens, sending chat bubbles back and forth, and writing the kinds of text (e.g., code) that chatbots are trained to produce. I wonder whether chatbots would seem so impressive without the conditions that lead large numbers of educated people to sit inside in front of computers all day. There’s also a backstory here about an algorithmic way of life, of which chatbots are just the latest, strangest chapter. Chatbots may be philosophical zombies that usurp human qualities in the body of a computer, but computers had to draw humans a little closer before that became possible.