ReImagining Liberty Blog

A blog about the emancipatory and cosmopolitan case for radical social, political, and economic liberalism. By Aaron Ross Powell.


A Change of Name

If you happened to visit the website for this newsletter today, you might’ve noticed things look a little different. I’ve been considering this shift for a while and finally went through with it: My newsletter is now called ReImagining Liberty.

It’s not a big change, no. But it does decouple things a bit from specifically me, and so gives some space to expand the publication to other writers in the future—a project I’m thinking through. I also want to tie the essays I write here more closely to the podcast I host of the same name.

The other piece of news is there’s now a way for you to earn free early access to episodes of that podcast. Just tell your friends about this newsletter, and if they sign up, you’ll get free premium access. (Free subscribers count. You don’t need to talk your friends into paying for anything.)

That’s all for now. Thank you again for being a subscriber.

The Worst Argument in Politics

Plenty of people have strong opinions about politics, but most of them are quite bad at making arguments for their preferred policies. Often this is an information problem. Public policy is frequently complicated or complex. Complicated in the sense that, even when we can predict the outcomes of policy interventions, understanding all the moving parts in order to make accurate predictions demands quite a lot of knowledge—knowledge non-experts lack. Complex in the sense that many policy interventions have so many interacting aspects, in so many unknown or novel situations, that accurate predictions are, frankly, impossible—even for experts.

In both cases, even if your predictions will turn out wrong, or you can’t make a full prediction in the first place, you can at the very least mount an argument for why your preferred policy might be good, or why other policies might be bad. What this means is that engaging in political debate, if you want to do it well, demands a great deal of knowledge. But it also demands an ability to construct an argument with and around that knowledge. It’s here, in my fifteen years of being a professional in the world of politics, that I see the most failure. In short, a great many people are just terrible at constructing political arguments, or at assessing the quality of the arguments they draw on to support their side’s preferred policies.

And if there’s a single, most common way people go wrong in constructing political arguments, it’s in not asking themselves a simple question: “Does the argument I just gave prove too much?”

I’ve been meaning to talk about this variety of bad argument for a while, and was finally prompted to do so now because I came across a perfect example of it on Threads recently. It’s about religion, not politics, but it’s a particularly clear version of what I have in mind, so let’s start there.

There’s a lot wrong with this argument, but the problem I want to highlight is this: If we take this argument seriously, then it tells us that anyone who asserts the non-existence of anything is “contradicting themselves” and has “no solid foundation” for their claim that whatever it is doesn’t exist. Yes, it’s true that it’s difficult to turn up evidence that God doesn’t exist. But it’s also difficult to turn up evidence that leprechauns don’t exist. This isn’t to say that disproving God is as simple as disproving leprechauns, but rather that what sounds at first blush like a plausible argument that atheists are irrational is, instead, an argument that commits the person making it to a whole lot of conclusions they don’t believe.

The mistake, then, is in not asking, “What else does the argument I’ve just given prove?”

Now, maybe your argument is solid, and, even though it points to conclusions you don’t like, you’re willing to bite those bullets in the name of consistency. Or maybe you need to add additional steps to show why your argument leads to its conclusion in this particular case, but not in others where it also seems to apply. But to do either, you first need to acknowledge this feature of your argument. Instead, in my experience, most people who advance arguments with this feature aren’t even aware of it. (You can see a version of this in many of the bad arguments against AI.) The result is that arguments that look solid to the person making them turn out to be rather easy to dismiss.

Here’s an example: It’s common for people who oppose private schools, or who support socialized medicine, to argue that both education and healthcare are extremely important and necessary for a quality life. Because of this feature, they must be provided by the government—and not just paid for by the government, because otherwise tax credits for private schools, or subsidies for private insurance would be fine. And this sounds plausible. Except that if you stop there, you’ll need to answer why that argument doesn’t apply to everything else that’s also extremely important and necessary for quality of life. Few people who oppose private education or support socialized medicine also argue that the government should manufacture and distribute all the food we eat or the clothing we wear, but life is pretty hard without decent food and clothing.

So if you stop with “It’s important, therefore government should support it,” you’ve made an argument that doesn’t hold up to much scrutiny, unless you’re also arguing that government should give us our food and clothing, too. Just like arguing that the impossibility of proving non-existence means you should believe in God doesn’t hold up to much scrutiny unless you’re willing to accept this is also why you should believe in leprechauns.

If you don’t want your argument to pretty much fail on its face, you need to instead recognize when it looks to prove too much, and then give reasons why, in fact, it doesn’t. That’s how you build a better argument.

The Liberal Identity Crisis

One of the culture war’s contemporary tributaries is the conflict between establishment, older, New York Times-type liberals, who worry that the cultural-left liberal youth have, in fact, become illiberal, and those younger left liberals, who lash out at the New York Times types as, in fact, right-wingers unwilling to admit it. Both sides claim the liberal label, and both sides claim that label doesn’t apply to the other.

The Overton Paradox

Data scientist Allen Downey has an idea that offers a pretty compelling way to understand this disagreement. He calls it the “Overton Paradox.” Conventional wisdom is that people become more conservative as they age. And this is true in terms of self-identification. Older people are more likely to say they are conservative than younger people. But if you give them a series of questions intended to tease out their actual beliefs, instead of what they choose to call themselves, it turns out people become slightly less conservative as they get older.

The way to solve this paradox becomes clear when we graph those survey responses to questions getting at conservative beliefs.

Social “liberalism” isn’t a fixed standard, but a relative one. Roughly speaking, to be a liberal is to be more socially tolerant, inclusive, and dynamic than the current social baseline, and to be socially conservative is to be less so. This means that an accurate application of the label can change, even if a person’s underlying beliefs have not. Someone who was justifiably called a radical social liberal in 1924 might well be seen as an extreme social conservative in 2024.

This makes intuitive sense. The world has clearly become more socially tolerant and open over time. And the way that happens is by each new generation pushing things forward, which means it is both socially liberal during its day, but from the perspective of future generations, it’s rather behind.

So the way to solve the paradox (that, on the one hand, it seems like people become more conservative as they get older, but, on the other hand, they don’t actually become more conservative) is to understand that the very same set of social beliefs that makes you a liberal when you’re in your 20s might well mark you out as a conservative when you’re in your 50s.

“I’m a liberal.” “No, I’m a liberal.”

Where this gets interesting in the context of the contemporary culture war is when the “liberal” label matters a lot to people. If you’re generally indifferent to politics—when you’re a normal person, in other words—you’re not terribly attached to political terms. Saying, “Sure, I was a liberal back then, but now I feel more like a conservative” doesn’t bother you.

But if you’re politically engaged, and particularly if your politics is your personal brand, then those labels matter a lot. Not just in terms of how others define you, but also in how you define yourself. Being a “liberal” (or a “conservative”) isn’t a mere descriptor, but your self-image. This is compounded if that label is also your professional identity—if it is effectively how you earn your living.

In practice, then, if you’re an older, established liberal, who has been writing from a self-consciously liberal perspective for your whole career, you’re going to get upset when the young people start telling you that, no, some of your views aren’t liberal at all, but instead quite reactionary. And this is most likely to happen around those issues where the culture has moved the fastest, such as the acceptance and normalization of LGBTQ+ people, identities, and expression.

This gets us to those accusations of illiberalism. You think of yourself as a pretty liberal person—and think of being a liberal as central to your identity. But now the young people are mad at you because your views on transgender people, or on the appropriateness of racially charged humor, are, by their estimation, behind the times. You know you’re a liberal, but those kids are telling you that you’re not, because they’re the actual liberals, and you’re not like them. But you are a liberal, you’re quite confident of that, and so it must be the kids who aren’t liberals. They’re illiberal, and their views thus don’t represent society moving in a more liberal direction, but instead toward illiberalism.

The conflict appears intractable because, in a meaningful sense, both sides are right. Each is liberal by the standards of the time when it came to understand that term. And so, for both, the other side is not liberal by those same standards. What it means to be liberal isn’t fixed, but constantly evolving: what is today the youth vanguard will eventually become the accepted mainstream, and then, decades further down the road, that accepted mainstream will be reasonably labeled “conservatism.” That’s just how social progress happens.

The Hidden Violence in Our Laws

A simple fact about the nature of government is this: It’s violence.

I know we live in a democracy where we decide together what the laws should be, and with the aim of achieving a better life for all. “Government is simply the name we give to the things we choose to do together,” Rep. Barney Frank (supposedly) once said. And we get angry when the state conducts itself violently, like when an NYPD police officer murdered Eric Garner.

But hear me out. Yes, most laws are created with the intent of improving the state of things. And, yes, police brutality isn’t a necessary feature of a well-run government. Yet the reason Daniel Pantaleo choked the life out of Garner is because Garner was selling unlicensed cigarettes. At some point, well-meaning people decided that, if we’re going to allow the sale of cigarettes, only some people will be allowed to sell them, and Eric Garner wasn’t one of those.

That law—“You must not sell cigarettes without a license”—is only a law, and not a request, because it carries with it a threat: If you disobey, people working for the state are permitted, or compelled, to command you again, and then punish you if you persist. Or, in some cases, to punish you regardless of whether you persist.

That punishment might be a fine, of course. And for the sale of unlicensed cigarettes, it certainly wasn’t on the books to murder the offender during arrest. But every law, to effectively be a law, must carry with it the threat of violence—and the understanding that the threat is serious and will be carried through.

Take a pretty innocent law. The owner of your favorite professional football team talks the elected leaders of your city into forking over tax dollars to help him with the costs of a new stadium. You like the team, but you know the owner is quite rich. He certainly has more disposable income than you do. Furthermore, you’re skeptical of claims that the spent tax dollars will be more than made up for in additional revenue from additional business the stadium will bring to the city. (You should be.) The city’s passed a law saying your taxes are going up by some amount, though. That’s a command. What happens if you disobey? What happens if you withhold an amount in taxes equal to what the city wants you to give to the rich football owner?

Obviously, they might not catch you, in which case nothing happens. But what if they do? You might get told to make an adjustment in how much you’ve paid. But what if you refuse? They could issue you a fine. But what if you don’t pay? Eventually, men with guns will show up at your door and demand you come with them. But you say no. At this point, they’ll use physical force—violence—to make you anyway. Resist and that violence will ratchet up. Resist hard enough and they very well could kill you.

And it’s like this for every law. Jaywalking? There’s violence waiting for you if you persist. A ban on incandescent light bulbs? Better hope they don’t catch you stockpiling them. It’s easy to overlook—or imagine away—the necessary connection between law and violence because most people obey, and if you’re fortunate enough to live in a low crime area, you don’t typically see what happens when people don’t. Still, every time you say “There oughta be a law...,” what you’re actually saying is “Men with guns should use violence against people who do (or don’t)...”

But so what? Even if we don’t typically think through all the processes behind each law all the way down to the violent foundations, what does doing so matter? Surely we need laws. We can’t live in a state of lawlessness, and if occasional violence in their enforcement is the price we pay for a lawful society, it’s a price more than worth it, no?

Here’s why thinking of law as violence matters: It gives us a clearer sense of what it is we’re doing when we do political things. The far right tends to get this. They recognize that state action is grounded in violence, but they celebrate it. They fantasize about directing that violence at their enemies, or out groups, or disfavored racial and ethnic categories. They cheer when cops bust heads, or when their presidential candidates promise carnage if they don’t get their way. They have no illusions about this aspect of the state’s nature, but they have a profoundly unethical perspective on what we should make of that.

The left, on the other hand, particularly the progressive left, tends to pretend the state is something different. (The Marxist-Leninist or Maoist far left, by contrast, frequently looks more like the far right when it comes to recognizing the state’s violence.) Progressives and mainstream left-leaners, unlike the far right, either believe, or tell themselves they believe, Barney Frank’s line. Laws are just us working together for a common purpose, and when things get violent, it’s a deviation from the ideal. Banning the sale of unlicensed cigarettes isn’t a demand that the state threaten violence; it’s a recognition that cigarettes are bad for you and the world would be better if they were less accessible.

Neither this nor the far right’s glorying in violence is an ethical perspective to take when engaging in politics. Not because the only ethical answer is to abolish all violence. It’s reasonable to conclude that at least some laws are necessary, and those laws must necessarily be backed by force. Rather, an ethical perspective on politics sees the state for what it is, while also believing that there’s nothing glorious or praiseworthy about the infliction of violence, and so proceeds cautiously and mindfully in political action so as not to create more violence than absolutely necessary.

This leaves open a lot of questions about institutions and justice, and that’s okay. We needn’t answer those now, if they can ever be fully answered at all. What’s key is that getting those questions right, and arriving at good answers, must begin with an ethical perspective on what we’re doing in the first place. If we ignore the nature of this powerful tool we bring to bear on social problems, if we cover our eyes or stick our fingers in our ears about the necessary violence of political action, then we will inadvertently or callously inflict too much, or set up and enable a system far more violent than we desire. Or, if we see the state for what it is, but relish that, then we will direct government not to just ends, but to evil ones.

A Politics of Harmlessness

We should aim for harmlessness. No matter what else we set ourselves to throughout our lives, and no matter what other values and principles we might hold to, underpinning and informing it all must be a commitment to refrain from causing harm.

Sometimes harm is unavoidable, of course. We can find ourselves in tragic situations where we have no choice but to inflict suffering, if only as an alternative to inflicting greater suffering through a different course of action. The world isn’t perfect and it isn’t fair. But our intent should be harmlessness, nonetheless. And this intent, if we take it seriously, means developing an awareness of which actions are harmful, and which harmless.

There are obvious examples: Don’t hit people. Don’t take their stuff. These fall under what we typically think of as rights violations. But refraining from violating strict rights—however we might define and delineate those—isn’t all that’s necessary to avoid inflicting harm. We can be cruel without transgressing rights. We can make others’ lives worse without hitting them, and without taking their stuff. To the extent that our actions affect others, we should strive to make sure those effects make their lives better.

Our perspective of harmlessness shouldn’t be limited to an outward direction, either. Just as we shouldn’t harm others, we also shouldn’t harm ourselves. The world inevitably inflicts pain upon us. We will get sick, grow old, and die. But so much of what we suffer through in our lives, so much of the unease and stress we feel, isn’t because of something the world has done to us, but rather the way we interact with the world and react to it. If you fall down and skin your knee, that’s unquestionably painful, and that’s a pain you’ll have to bear for as long as it takes the wound to heal. That’s a fact of the world, and not within your control. What is within your control, however, is how you respond to injury and discomfort. Do you accept it and move on? Or do you instead curse fate for giving you this skinned knee, dwell on the pain, and give the wound your full attention, letting it consume you, until finally, begrudgingly letting go?

This is how we fail in turning that principle of harmlessness inwards. We give in to aversion, focusing our attention on the pain and the unfairness of the injury. We give in to craving, longing for when we were healthy and letting that longing take over our mental landscape. We give in to a delusion that, if only we could exercise more control, and if only the world weren’t so unfair, we wouldn’t get skinned knees in the first place. And the suffering we inflict on others is just as much a product of aversion, craving, and delusion. We want them to behave in just the way that conforms to our tastes, or we want them to stop acting in ways that make us uncomfortable, or we want the world to stay the way it is (or, more often, the way it was when we were young and imagined ourselves to be happier and more stable), and demand that its relentless dynamism just stop.

The path to harmlessness, then, is the path out of aversion, craving, and delusion. With those set aside, we can better see which of our actions and beliefs are harmless and wholesome, and which are harmful and unwholesome.

With that perspective in mind, we can turn to the question of politics, and what a politics that takes harmlessness seriously might look like. And the clearest way to approach answering that is by looking at politics as it usually manifests, and how it is the institutional and social embodiment of the very aversion, craving, and delusion we need to abandon. Because, to an important extent, a politics of harmlessness just is a political culture not bound up in those traits that are the genesis of harm.

Social contract theory tells a neat story about the origin of government, and so the origin of politics. We used to live in a state of nature, free to do as we pleased, without any protection from those whose “as they pleased” was to hurt us. Thus we agreed it would be better if, instead, we established a ruler who would then protect us from violence, theft, and so on—and at the reasonable cost of giving up just a little of our freedom, and a little of our resources. And, unless this ruler was an autocrat, he’d be influenced by, to one degree or another, the will of his subjects. Hence, politics.

Whether this narrative has any historical truth (it doesn’t) is beside the point. What’s important is that it neatly frames a helpful way to conceptualize government (a powerful tool for guiding the behavior of humans, using violence and threats of violence as its mechanism for enforcing that guidance) and politics (the ways the people go about directing the use of that tool).

Unfortunately—or, in many cases, tragically—that tool gets directed under the influence of aversion, craving, and ignorance. We use the government not (just) to reduce harm, but to inflict it. If that group of people over there—who are not themselves directly harming anyone, but are instead just behaving or expressing themselves in ways different from what we might prefer—are making us uncomfortable, or clashing with what (we imagine to be) our “traditional” values, we use the political process to direct the state to punish them. We declare their expression (or their different religion, or their way of living) illegal, and then let the state enforce the law against them. Or if the cultural patterns we grew up with and are used to give way to evolution and change (as all cultural patterns inevitably do), we insist that the government step in to punish those at the forefront of that change, or subsidize those seeking to hold it back. We give up on the notion of harmlessness because, consumed by aversion, craving, and delusion, we convince ourselves that this time, this bit of violence, isn’t harm, or perhaps justified harm, and we use politics to carry it out in our name.

This sort of thinking infects so much of politics that it can be difficult to imagine politics without it. It is so common that most people don’t even recognize it as harmful. Rather, they view it simply as what politics is for. They lose sight of harmlessness and, either from ill-will or simple ignorance, inflict terrible suffering upon each other. A politics of harmlessness would reject those uses of government. It would recognize that happiness depends upon non-harming, and that our own contentment is not more valuable than the contentment of everyone else we share the country (and the world) with. It would respect the dynamism of human societies and cultures, and the right of everyone to peacefully pursue their favored lifestyles, religions, modes of expression, and preferences. And it would see clearly how the attempt to prevent that dynamism and diversity—an attempt always destined to fail—can only bring greater suffering, not just to others, but to ourselves.

Originally published at Students for Liberty’s Bastiat Scrolls.

Why AI is the New Sliced Bread

You’re an artisan. You’ve spent years or decades cultivating your craft, drawing knowledge and inspiration from family, teachers, peers, and artistic heroes. Inspired by them, you learned, practiced, and developed, hoping that someday you could leverage your passion, skills, and creativity into not just a remunerative career, but an emotionally and spiritually fulfilling one. But then someone invents a way to automate what you do, to have machines carry out the same craft. The results aren’t quite as good as what artisans like you can produce, but they are good enough to meet most people’s needs—and, even more damaging to your dreams, they are considerably cheaper. You can only work so fast, and thus need to charge a pretty high price for your work to keep a roof over your head, the lights on, and your stomach full. The machines work far faster, producing at scale, and you now justifiably fear you’ll never be able to make your art more than a hobby.

To make matters worse, people you know, or bump into on the street, or come across chatting about it, are consuming the product of these machines, showing it off to each other, and discussing how much they like it. These machines only work because their designers stole the knowledge people like you have been building over generations, watching what you do and how you do it so they can reproduce the outputs of that knowledge and craft. And they do it all without compensating you, or any of your peers, and all in the service of producing soulless simulacra of your art.

I’m speaking, of course, of baking bread.

There was a time when every loaf of bread had a baker: an actual, living, breathing person who used his or her hands to mix the ingredients, knead the dough, shape it, and bake it. They learned this craft from other bakers, in an unbroken chain back to the very invention of bread, each refining his or her skills, innovating, bringing that human element that made every loaf unique. Then came factory baking and, in 1928, the first commercial use of Otto Frederick Rohwedder’s automated bread-slicing machine. Today, you can still find artisanal bakers, of course, but most every loaf of bread, bought by most everyone in the developed world, came from a machine. Each of those consumers could have, if they cared enough about the artisans who’d lost out in this transformation, instead paid a little or a lot more to get a loaf from an artisan.

Even you. Because—whether you’re an artist, a writer, or something else entirely—I bet you buy the majority, if not all, of your bread from that aisle in the grocery store stocked with nothing but the products of automating machines. You’re part of the problem. Not just by giving your money to corporations for their soulless product instead of an artisan for her soulful loaf, but also because every factory baked good you buy is the result of uncompensated use of the knowledge and craft developed by bakers going back generations.

Yet this probably doesn’t bother you. First because you’re used to benefiting from technologies drawing on what came before, and second because, while you might agree that the artisanal loaf tastes a bit better, you’d rather spend your money on other things. If you had no choice but to buy the more expensive bread from the small bake shop down the street, you’d simply buy less bread.

The most common arguments against the use of LLM technology—the chatbots like ChatGPT that produce text from a prompt, or the image generators like Midjourney that produce visual works—take a few forms. First, that these technologies depend upon learning from the work of human artists and writers, and those artists and writers weren’t compensated, or weren’t compensated fairly, for the use of their creations as training material. Second, even if they were, it would be wrong to use ChatGPT to generate text or Midjourney to generate images because doing so takes business away from human artists and writers.

What’s more, the resulting prose and pictures aren’t bad just because they’re so cheaply produced, or because they are patterned after the uncompensated knowledge and skill of humans, but also because they’re shameful. They lack the human element: the creativity and spark found in prose and imagery made by humans. Thus the person generating or using these knock-offs is triply wrong: he’s benefiting from stolen goods, he’s depriving a human of money, and he’s fundamentally misunderstanding the nature of the thing he’s consuming.

All this sounds plausible when we’re talking about art and writing, because art and writing are artistic. They’re the kind of work creative people do. We view them as different, somehow, from the craft of shoes, phones, aluminum pipe, perfect-bound paperbacks—or bread. They’re elevated, while the work of others is not. So while it is fine to eat Pepperidge Farm wheat bread, and it’s fine to put your dinner on a machine-cut Ikea table, it’s not fine—in fact, it’s morally wrong, and outright repugnant—to stick a Midjourney-generated image on the cover of something, or to use ChatGPT to write some marketing copy.

But I don’t know if artisan bakers agree that their craft isn’t as elevated, as quintessentially human, as writing or drawing. I don’t know that the artisanal cobblers or carpenters do, either. So what’s different? Why can artists and writers happily save a few bucks buying from Pepperidge Farm, Nike, or Ikea, but when the technology instead allows the rest of us to save on images and prose—or to be able to have custom images and prose at all, given how the cost of an artist or writer is often well outside our reach—that technology is immoral, dangerous, and in need of shutting down?

An argument for that distinction might exist. There might be good reason to think this time the automation is different. But the problem with so many arguments against AI technology is that they don’t address the distinction, and instead either wave it away or ignore it entirely. This makes the arguments look less like principled stands, and more like self-interested protectionism, and self-serving snobbishness.

If artists and writers complaining about AI don’t tackle that question head on—don’t give an account of why their creativity mustn’t be automated, but everyone else’s creativity is fair game—if they instead just assume the obviousness of a difference, then the rest of us can rightly ask them why they’re so callously and selfishly hurting bakers by purchasing factory produced bread.

Against a Life of Moderation

Last Sunday, Aaron Bushnell set himself on fire, his death an act of protest against Israel’s actions in Gaza. The self-immolation provoked much conversation, as was its intent, and one entry into that discourse comes from Graeme Wood at The Atlantic. His claim is that self-immolation is a bad form of protest, because it is ineffective and socially contagious, and that if the act is to be carried out at all, it should be only in service of a particularly acute and worthy cause, which Aaron Bushnell’s was not.

Many have attacked the main thrust of Wood’s argument, but I’m not interested in doing so here. Rather, what I do want to explore is one point he makes in his essay that is both factually wrong and wrong in a way that speaks to the ethical perspective I write about and apply to contemporary social and political issues.

Wood frames his case by citing the most famous example of self-immolation: “In 1963, the monk Thich Quang Duc soaked himself in gasoline and lit himself on fire to protest the government of the Vietnamese leader Ngo Dinh Diem.” You’ve probably seen the photograph. Wood proceeds to discuss Bushnell within a Buddhist context, noting that he “described himself as an anarchist, and generally eschewed what Buddhists might call ‘the middle way,’ a life of mindful moderation, in favor of extreme spiritual and political practice.”

The problem with this line is that Wood is simply wrong about what Buddhists mean by the Middle Way. This matters, even if you aren’t a Buddhist, not just because we should try to be accurate in our understanding of ideas, but also because the Middle Way is an important philosophical insight regarding the nature of an ethical life. Here especially, Wood’s misrepresentation of the Middle Way has an ideological and political root, one that genuine social and political progress depends upon avoiding.

By Wood’s characterization, the Middle Way is something like the common mischaracterization of Aristotle’s “golden mean”: that the virtuous act is found in the middle between extremes. That’s not what Aristotle meant, of course. He instead advised us to use our wisdom to identify not the middle between two non-virtuous poles, but the correct position between them, with “correct” varying based on the particulars of the situation. Sometimes anger is the wrong response, but sometimes not getting angry is a sign of a lack of virtue or understanding.

Likewise, Buddhism’s Middle Way is not a call for us to always stake out a moderate position, and so to avoid radicalism in our spiritual practice. Instead, the term originates in the story of the Buddha’s quest for awakening. At the time, there were quite a lot of wandering mendicants and ascetics seeking enlightenment, and with diverse methods. A popular one was self-mortification. You effectively starved yourself as a way to turn away from desire and craving, to abandon this worldly body for spiritual pursuits. The Buddha tried that and it made him miserable without bringing him any closer to finding an end to suffering, the goal he had set himself at the start of his quest.

Next to the self-mortifiers, plenty of people then, as now, saw the path out of suffering as one of extreme self-indulgence and hedonism. The Buddha rejected that, as well. Instead, he mapped out a Middle Way that did not depend upon starving oneself, but also recognized the causes of suffering in the very sort of craving the hedonists embraced. As it’s put in an early Buddhist text, “[T]hese two extremes should not be cultivated… What two? Indulgence in sensual pleasures, which is low, crude, ordinary, ignoble, and pointless. And indulgence in self-mortification, which is painful, ignoble, and pointless. Avoiding these two extremes, the [Buddha] woke up by understanding the middle way of practice...” Rather than focusing the quest out of suffering on destroying the body, or on giving in to our craving for pleasure, the Buddha offered a new, but still radical focus: the intentional, careful, and diligent cultivation of a perspective on, and understanding of, ourselves and the world such that we could set aside the craving, aversion, and ignorance that are in fact the source of our suffering.

The term was also applied to the Buddha’s approach of not taking a position on specific metaphysical questions, such as whether the cosmos is eternal or infinite, or whether the soul and body are the same thing. Because his goal was the end of suffering, and because achieving that goal is demanding and time-consuming, he again counseled focusing instead on what, from that radical perspective, was relevant to accomplishing that.

Later, “Middle Way” took on a third meaning, in the branch of Buddhist philosophy known as the Mahayana. Here it meant an understanding of “emptiness,” or the notion that all things are empty of intrinsic existence and nature. By focusing on this (again, quite radical and extreme) idea, and by coming to understand it, awakening could be achieved.

None of these three approaches to the Middle Way are “a life of mindful moderation,” as Wood puts it. None are rejections of the extreme, because all of them, by the standards of the day, were extreme. The Eightfold Path, the practice at the heart of Buddhism, is such a stark and demanding departure from the way most people live that there’s little way to avoid describing it as the sort of “extreme spiritual ... practice” Wood tells us Buddhism rejects. Buddhism is far from moderate.

Instead, the Middle Way is a radical and extreme focus on what matters, in both one’s internal perspective and outward spiritual and ethical pursuits.

Put another way, if Wood is right in his characterization of the Middle Way, then the Buddha was a bad Buddhist, and so were most of the major figures in the history of Buddhism. The Buddha did not set himself on fire, but he did take on the radical lifestyle of a wandering mendicant, owning nothing but a robe and an alms bowl, and begging for his food. And while he didn’t engage directly in explicit political discussion or participation, he did, quite radically, refuse to accept the natural superiority of the brahmin class, and thus the basis of the Indian caste system. I feel safe in assuming his Indian contemporaries found that quite politically extreme.

Whether Aaron Bushnell was right to take his own life, whether it was wise from the perspective of his cause to do so in such a public and shocking manner, and whether that cause was just, are questions worth answering. What we can say with certainty, however, is that Buddhism, as a comprehensive ethic for how to live well, do good, and find happiness, isn’t a moderate position or lifestyle. And the fact that it isn’t tells us something important about the need to be radical—extreme, even—in the standards we set for ourselves, and the demands we make for a better, more peaceful and compassionate world.

I’ve been writing an ongoing series setting out the core ideas of Buddhist philosophy and ethics. If you’d like to learn more, and get a sense of Buddhism’s radicalism, start here, then read this, and finally this.

Substack Doesn't Want You to Leave

This is a newsletter primarily about political and social ethics, but it’s also a newsletter that switched away from Substack a couple of months ago, for reasons related to the way Substack leadership runs their company. Those are bound up in ethics, even if not in the way I typically write about, and I also know that a lot of my readers have their own newsletters, and so are interested in the state of that ecosystem. So bear with me as I dig into a very odd new feature Substack proudly announced today: Direct messaging. From their post:

Direct messages live in the Chat tab on the Substack website and app, and appear alongside chats from your subscribed publications. You can start a DM from any writer or reader’s profile page, from the Chat tab, or by hitting “Share” on a post or a note. When you receive a DM, you’ll be notified in the app and by email.

Why would Substack add “direct messaging,” when, as a reader, you can already direct message with Substack authors you subscribe to by just hitting “Reply” in your email? Why add a new direct messaging system within the official Substack app?

First, it’s important to understand how Substack’s newsletter infrastructure works. When you start a Substack, and people subscribe for free, they’re adding their email addresses to your list. That list, whether it's a handful of your friends or tens of thousands of addresses, belongs to you. You can, at any time, download the list and, if you want to, upload it to another newsletter platform, such as Ghost or Beehiiv. In other words, the main connection point you have with your readers (their email address) is portable. You can leave Substack, take your list elsewhere, and continue sending newsletters to your readers, without them having to resubscribe to you.

Just as important to understanding what’s happening with Substack is how their system handles paying subscribers. With free subscribers, you have a connection point: an email address. With paying, you have two: an email address and a credit card number. That’s why, if you turn on premium subscriptions in Substack, you have to link your Substack publication to a Stripe account. Stripe is the big player in online credit card processing. As a reader, when you pay for a Substack, you’re giving your credit card information to Stripe, and it’s Stripe that stores it, not Substack. And here’s the important bit: Most other newsletter platforms use Stripe for handling paying subscribers, too. The result is that, as a Substack publisher, even your paying subscribers are portable. Want to move to Ghost or Beehiiv? Download your Substack subscriber list, import it into the new platform, and then attach that new platform to the same Stripe account you were using for Substack. Run the handy automated migration tool your new platform comes with, and your paying subscribers will keep getting your premium content—and keep giving you money each month—without any need for them to resubscribe.

That’s a big deal. It means that, as a publisher, you aren’t locked in to Substack. You can use it to build a huge platform, or even a huge business, and then just ... leave. It doesn’t cost you anything to do that, and the whole process takes maybe a couple of hours. That’s good for publishers, because platform lock-in is bad. And it’s a selling point for Substack, because they can say to writers, “We’re giving you great tools, and you can use them as long as you like them, and as long as you think they’re worth 10% of your earnings, but if you decide you’d rather be elsewhere, we’re going to make it easy and painless for you to switch.”

Nothing Substack has done lately changes any of that. You can still build a large audience there, and have a large number of them paying you, and then take it all with you if you jump ship. But that’s not the whole story. Because the other half of it, and the other key to understanding why Substack would add “direct messages” in its app, is Substack’s financials.

Running a platform like Substack isn’t cheap. You’ve got staff, infrastructure, and the costs of sending untold millions of emails. This burden is made heavier by the fact that Substack gives most of it away for free, which is also why they’ve grown so quickly and become so dominant. Unlike nearly every other newsletter platform, they don’t charge you anything to host a free newsletter, even if you have thousands or hundreds of thousands of subscribers. And if you do turn on premium subscriptions, their pricing model is to take 10% of your earnings, instead of asking for a fixed monthly fee. But this also means Substack needs to earn a lot more from each of the people who are running paid newsletters than other platforms do, because it’s giving away so much for free to those users who aren’t. (My own platform, Beehiiv, in contrast, is free if you have 2,500 or fewer subscribers and don’t charge for subscriptions, and then a flat $50 or $100 a month, depending on which features you need, if you want to charge. But it doesn’t take any cut of your earnings beyond that.)

This all works if Substack is making a lot of money. But they’re not. In their prior feature announcement, made on February 22 and introducing an upgraded recommendations system, they included this quite telling line: “In its short life, Recommendations has become an indispensable part of the Substack network, which in recent months has swelled to more than 3 million paid subscriptions to writers and creators.”

There are 3 million total paid subscriptions across Substack’s network. Now, 3 million is a big number, sure. But what does that mean for Substack's actual earnings? Not good news. Let’s ballpark it by assuming that the typical subscription fee newsletters charge is $8 a month. That’s the default Substack sets the price at when you turn on premium subscriptions, so it’s probably a pretty good guess. And let’s also assume, to make the math easy and to paint a rosier picture for Substack, that all of those 3 million subscribers are paying monthly, instead of getting the annual discount.

Three million subscribers paying $8 a month works out to $288 million a year flowing into the Substack network. But the company itself pockets just 10% of that, because the other 90% goes to the writers generating those subscriptions. So Substack’s actual take is, by this estimate, about $28.8 million a year. Which is tiny. Even if $8 is lowballing the typical subscription fee (which I suspect it isn’t, because my strong guess is the typical fee is closer to $5 a month), by any reasonable measure Substack isn’t bringing in enough to cover their costs.
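If you want to check that back-of-the-envelope math yourself, here’s a quick sketch in Python. Only the 3 million paid-subscription figure comes from Substack’s announcement; the $8 price and the 10% cut are the assumptions I laid out above:

```python
# Rough estimate of Substack's annual revenue from its 10% cut.
# The subscription count is from Substack's Feb 22 announcement;
# the price and revenue share are assumptions discussed in the text.

PAID_SUBSCRIPTIONS = 3_000_000  # total paid subscriptions across the network
MONTHLY_PRICE = 8.0             # Substack's default price, assumed typical
SUBSTACK_CUT = 0.10             # Substack's published 10% revenue share

gross = PAID_SUBSCRIPTIONS * MONTHLY_PRICE * 12  # money flowing through the network
substack_take = gross * SUBSTACK_CUT             # what Substack itself keeps

print(f"Gross subscription revenue: ${gross:,.0f}/year")
print(f"Substack's 10% take:        ${substack_take:,.0f}/year")
```

Swap in $5 a month for the price and Substack’s take drops to $18 million a year, which is why I say the $8 guess is, if anything, generous to them.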

With that in mind, go back to the discussion of portability. If you’re a big publisher on Substack, generating, let’s say, $10,000 a month in subscription fees, you’re taking home $9,000 and giving Substack $1,000. (You’re not actually taking home $9,000, because you’re also paying credit card processing fees to Stripe, but we can ignore that for the purposes of this analysis.) That’s $12,000 a year you’re paying to host your newsletter on Substack. If you went to Beehiiv and chose their most expensive plan, you’d be paying $1,200 a year.
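The same comparison in miniature, with the $10,000-a-month publisher as our hypothetical and the $100-a-month figure standing in for a flat-fee plan like Beehiiv’s top tier:

```python
# Yearly platform cost for a publisher grossing $10,000/month:
# Substack's 10% cut vs. a flat $100/month plan. Stripe's card
# processing fees apply either way, so they're ignored here.

monthly_gross = 10_000  # hypothetical publisher's subscription revenue

substack_yearly_cost = monthly_gross * 12 * 0.10  # 10% of everything you earn
flat_plan_yearly_cost = 100 * 12                  # fixed fee, regardless of earnings

print(substack_yearly_cost)   # scales up with your revenue
print(flat_plan_yearly_cost)  # stays flat no matter how big you get
```

Note the structural difference: the first number grows in lockstep with your revenue, while the second doesn’t, so the gap only widens as your newsletter succeeds.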

What this means is that, the bigger your newsletter gets on Substack, the more reason you have to leave Substack, because you can save an extraordinary amount of money by doing so. And, as we discussed above, it is very easy to take the email list and paying subscribers you built on Substack and do just that. You’re not locked in, and the more money you earn for Substack, the more reason you have to stop using it.

To fix that and get their financials on a steadier path, Substack needs to make it harder for you to leave. They need to lock you in. Now, they’re not going to prevent you from downloading your email list, nor are they going to force you off of Stripe. (They might at some point, if things get bad enough, but their whole brand is built on not doing so, which makes it feel like a last resort move.) Instead, the way they build lock-in is by shifting the primary points of connection between you and your readers from open and portable email addresses to closed and proprietary relationships.

And, thus, “direct messages.” Thus the “followers” you get in Substack Notes. These are points of contact between you and your readers that aren’t portable. If you leave Substack, you’ll lose them. The hub of all of this is the Substack app, and a great many of the feature announcements they’ve made lately are about getting more and more people to view the app as the primary way they read Substacks and interact with Substack authors. The more publishers rely on the app instead of email as the basis of their relationship with their readers, the harder it will be for them to move their newsletters off of Substack’s platform.

Direct messages aren’t about giving you a better way to connect with your readers. They’re about shifting your relationship with your readers into a closed system so you’re less likely to flee to platforms that don’t take one dollar out of every ten you earn.

How to Talk Yourself into Defending Nonsense

If you spend enough time in online political spaces, particularly those populated by pundits and intellectuals, you’ll inevitably run across the phenomenon of an otherwise very smart and educated person arguing forcefully for obviously wrong, and often entirely nonsensical, claims. It’s perplexing, and typically a bit embarrassing, but the mechanism by which it happens is relatively straightforward. And it’s worth being aware of so that we can better catch ourselves before we head down a similar path.

The process starts by going outside one’s wheelhouse. Even the smartest of us have some areas of knowledge in which we might be experts, but plenty in which we’re not. But because politics touches on, well, everything, and so talking about politics broadly means having to talk about anything, the incentive to drift outside of one’s domain of expertise when engaging in the discussion of political (or politicized) ideas is strong. Further, the way you stand out from the pack in intellectual spaces is by making unorthodox claims. If you argue conventional wisdom, you’re one among many. If you argue unconventional wisdom, you’re closer to unique. This doesn’t mean conventional wisdom is always right, or that radically departing from it is always a sign that your argument is bad. The point is simply that, if you are motivated by a desire to provoke discussion and debate, there are strong incentives to stake out positions that go against the grain.

If your unconventional or controversial argument gets attention, then the next step is certain to follow: You’ll get criticism from people who disagree with you, and many of them will be people who have more knowledge of whatever the topic is than you do. It’s not your area of expertise, after all, but it is theirs. However, because we’re talking about politically charged questions here, and because you staked out a position different from the recognized experts, it’s likely not just that they disagree with your take, but that their politics differ from yours as well. Thus the people criticizing you are, politically, from the other side.

This leads to the third step. Because you’ve made a political argument, and because the people criticizing your argument also disagree with your politics, it’s easy to convince yourself that they are motivated primarily or entirely by their politics. It’s not that they just happen to know more than you do about whatever it is you staked out a position on, and so are able to spot errors in your argument that you didn’t see, but that even the appearance of objective criticism on their part is just that, an appearance, when really what’s going on is they don’t like the political position your argument props up, and so are attacking it for disingenuous, and merely political, reasons. And if that’s the case, then the content of their criticism is beside the point, and not worth taking seriously.

What’s more, the very fact that they rushed to criticize your argument, instead of simply ignoring it, is evidence that they feel threatened by it. And because this is all in a political context, the fact that they feel threatened by it is itself evidence that the argument is threatening to their politics. In other words, it’s evidence that the argument is right. Which means you shouldn’t revise or abandon it, but instead double down on it.

When described in this straightforward way, the errors in reasoning along the way, from arguing outside your area of expertise to digging in on what you ought to recognize as a poor argument, are pretty clear. But if you pay attention, you’ll notice this process playing out all the time. And if you care about getting your arguments right—not just as a way to win debates, but from a desire to home in on the truth—you’ll put in the effort, and cultivate the intellectual humility, to stop yourself before you end up defending nonsense.

Living Well Means Recognizing Three Facts of Our Existence

If there’s a central thrust to my writing lately, and to my broader scholarly project, it’s this: Talking about politics in isolation—talking about preferable policies, or the structure of political institutions—is valuable, but it misses most of why our contemporary political environment feels so toxic and dysfunctional, and why so many people feel so unhappy in our modern and politicized world. Rather, what’s really going on is a failure of ethics. Thus (just) getting the right policies in place, or tweaking our institutions here and there, won’t fix things. Instead, we need to begin with what it means to live ethically, understand the connection between that and happiness and contentment, and then see how it all ties into being citizens of a dynamic, pluralistic, and, yes, political world.

That’s why my last handful of posts here have been a big step back, an attempt to sketch a general picture of ethics and the good life, so we have a framework through which to better understand specific social and political maladies. (For example, why social conservatism isn’t just the wrong way to view the ideal society, but also a cause of persistent suffering for social conservatives themselves.)

In “This is the Good Life,” I offered a very short, very high level summary of this ethical perspective, and in “Barriers in the Way of an Ethical Life,” I discussed some of the reasons we find it such a difficult goal to achieve. Today, I want to discuss the fundamental nature of humanity, the raw facts of our existence that form the initial substrate upon which our ethical quest must build.

All of us intuitively crave stability. Whether it’s the hope for lasting happiness, relationships that never change, or clinging to our sense of who we are, we have a deep discomfort with the reality of constant change. Yet, if we look honestly at life, both internally and externally, three truths present themselves: impermanence, the potential for dissatisfaction, and a profound lack of a fixed, independent self.

Impermanence. Everything is in flux. Relationships come and go, bodies age, even mountains, over vast scales of time, erode and vanish. Nothing stays the same: not circumstances, our physical being, or even our inner landscape of thoughts and emotions.

Dissatisfaction. A sense that something is always a bit “off” exists beneath the surface of our daily life. While joy and pleasure exist, even those experiences become tinged with dissatisfaction when we try to make them permanent. Life’s pleasures vanish quickly, or at least are constantly at risk of doing so. Yet, a deeper discontent arises from our very craving for things to be other than they are, creating conflict with the inevitable reality of change.

The Self. This is a tricky concept. There’s a feeling of “I” at the center of experience. However, what is this “I” when we closely examine it? Instead of permanent solidity, we find a dynamic stream of sensations, thoughts, memories, and self-images. Who we are right now is based on influences from our past, our biology, and a continuous process of change. The illusion of a static, autonomous “self” generates friction when the nature of life is constant flux.

These features of existence aren’t a cause for despair but an invitation to see reality clearly. It’s our resistance to the flow of impermanence and our search for lasting satisfaction in what won’t, and can’t, provide it that fuels our suffering. Clinging to this false idea of a separate, permanent self only reinforces this inner conflict.

This all sounds rather grim. But such an assessment deceives, because recognizing these features makes clear the path leading to acceptance of them, and the happiness that brings. Seeing impermanence allows us to appreciate fleeting moments without clinging to them. Acknowledging our dissatisfaction encourages us to investigate where we are seeking fulfillment in the wrong places. When we recognize the dynamic and impermanent nature of the self, it fosters humility and the possibility of living with less need to defend or inflate this fleeting construct we call “I.”

Understanding these marks takes time and mindful observation of our everyday lives. But through this process, we begin to live in greater harmony with the nature of reality. Instead of chasing an impossible permanence, true fulfillment might be found in accepting the fleeting flow, finding inner contentment, and cultivating a gentle, and less harmful, outlook.