ReImagining Liberty Blog

A blog about the emancipatory and cosmopolitan case for radical social, political, and economic liberalism. By Aaron Ross Powell.

Blog

What is Ethics?

I talk about ethics a lot around these parts, but realized I don’t have a go-to post explaining what I mean by the term. This is particularly important, because I use “ethics” as distinct from “morality,” while most of us treat them as synonyms. Defining terms matters when we’re thinking philosophically, so let’s correct that omission.

What is ethics? Most simply, it’s the set of principles (values, perspectives, beliefs, rules, character traits, etc.) that help us to live well. And “well” here doesn’t just mean, say, “pleasurably.” We can hedonistically enjoy ourselves while being unethical, or we can find pleasure in activities that are actually harmful, to ourselves or others. No, to live well means to be happy, to flourish, and to do so in a way that is admirable and blameless.

Morality, on the other hand, is about how we ought to act towards others. This makes morality part of ethics. Namely, the part that tells you how you should interact with other people. Or, typically, what the guidelines are for how not to interact with other people. Don’t hit them. Don’t steal from them. Don’t violate their rights or cause them unjustified harm.

This makes morality other-regarding and action-focused. Ethics–which, again, is the broader concept within which morality is a part–is both other- and self-regarding, and focuses not just on action, but also on values, beliefs, and perspective.

It is thus possible to be moral, in the sense of behaving properly towards others, while being unethical, in the sense of still holding to values, beliefs, and perspectives that are harmful to one’s self (even if you aren’t aware of that), and lead to a world that is harmful towards others as well.

It is not possible to be ethical while being immoral. Ethics is the big picture of the kind of person you are, and so to say “I’m ethical, but also immoral” is incoherent. It would be like saying, “I’m good at playing baseball, but also I don’t know how to catch a ball.”

Moral theories–i.e., theories that give us clear rules for how to assess what action is right or wrong in a given circumstance–are never complete or perfectly guiding, in either a descriptive or a normative sense, in that we can always think of edge cases that break them. This is what Trolley Problems are all about. They’re descriptively imperfect in that no single set of rules or principles captures the whole of how we think about and judge moral questions, but will instead inevitably contain ambiguities. And they’re normatively imperfect in that no single set of rules or principles can give us actionable and correct guidance in every possible situation, because the world is more complex than the set of rules, and so there will inevitably be situations where we have to rely on judgment outside the rote application of those rules.

This is part of why we need ethics. The values, perspectives, and beliefs that being ethical entails cultivating are what help us in those edge cases, where moral rules are unclear. If we focus on being ethical, we can be relatively confident that, in morally challenging situations, we will behave well. Or, at least, will behave more admirably than the person who hasn’t bothered to cultivate those traits of character, or who has internalized unethical values, perspectives, and beliefs.

Ethics is the whole ballgame. It’s how we live well, and how we do so in a way that is compatible with, and supportive of, everyone else doing the same.

The Politics of Broken Values and Warped Perspectives

It can be difficult to get a handle on just how bad our contemporary political environment has become and, more so, why. Everything just feels broken and so much of it driven not by a clash of philosophies, but instead by the rise and celebration of profoundly ugly values.

But it’s not just about ugly values. Yes, there are people out there who openly relish and venerate the worst behaviors and beliefs, and do so out of a self-conscious commitment to being the kind of person most of us, rightly, find unlikeable: Internet trolls, edgelords, dull comedians, duller fringe media figures, etc. We all know someone who hates the world and wants the world to hate them right back.

It’s tempting to view everyone who’s given into the ugliness as falling into that camp. To believe they know exactly what they’ve become, and what they’re demanding the country become along with them. But that’s a mistake. Most of them, if you ask, won’t readily admit the badness of their values, not like the trolls, edgelords, comedians, and fringe media figures will. Instead, they’ll genuinely view themselves as good and upstanding. Maybe a bit flawed, too, but who isn’t? Still, what’s worse, they’ll say, is what everyone else has become. Any ugliness they embrace in the political sphere is just a necessary feature of the kind of person or policy needed to push back on this cultural corruption, this drift away from how things used to be, back when the country had traditional values and not whatever it is we have today.

While I think it’s clear that quite a lot of those “traditional” values were themselves corrupt, and that social progress has come from abandoning them and the domination and hierarchies they insisted upon and reinforced, it’s important to note that it doesn’t feel that way to the people who hold fast to them. To understand how there can be that disconnect–how bad values can seem to the holder like good values–we need to introduce the idea of wholesome and unwholesome perspectives.

A perspective is how you see and make sense of the world and your place in it, and that seeing and making sense of isn’t just a product of the information you take in, but the attitudes and preferences, conscious or unconscious, you apply in working out the meaning of, and your response to, that information. If you are disposed to fearfulness, for example, then behaviors you see others engaging in will appear to you more threatening and worrying than if you are disposed to a more comfortable and contented perspective. And this can then lead you to lash out at harmless difference and benign change, and make you more susceptible to political leaders promising to use their authority to put a stop to it.

We can’t fully separate values and perspective, because they each form the other. The values you bring to understanding yourself and your world will inform what you find relevant, worthy of emphasis, or worthy of concern when you’re processing the environment around you–or what you’re hearing about that environment from television, politicians, and so on. At the same time, the perspective through which you view the world and yourself will lead you into habits of thought and behavior that dissolve, solidify, or shape the values you come to view as worth pursuing, cultivating, maintaining, or abandoning.

In particular, when I look out at the worst features of our political and social culture, the perspective I see dominating is one of aversion and clinging. It’s a perspective that says, “I don’t like this, and need to push it very far away, put a stop to it, or destroy it entirely.” And it’s a perspective that says, “I like this, and so demand that it dominate not just my life, but everyone else’s, and that it never change.” Taken together, these create a broader perspective of illiberalism, of seeing diversity, dynamism, individual choice, and social change not as features of freedom worth celebrating, but as threats to the one true and worthy way of life.

The trouble with this perspective of aversion and clinging isn’t just that it encourages you to behave badly towards people different from you, and to want to take away their freedom to live that difference in a peaceful way, but also that it leads to misery for yourself. It might feel like aversion and clinging are actually just the holding fast to old sources of wisdom and tried and true social arrangements and status rankings. But, as I’ve argued at length in another essay, they demand of the world what it cannot give. This perspective insists upon stasis, when everything is constantly in a state of change. And that disconnect between the way so many want the world to be and the way the world actually, inevitably, is then just creates needless suffering for the person unwilling to give up their aversion and clinging, and trapped in the ignorance of believing they can keep them while somehow achieving happiness.

Fortunately, we can change our perspective, and we can cultivate better values. It’s challenging. But if we are to live together and flourish, we have no choice.

A Change of Name

If you happened to visit the website for this newsletter today, you might’ve noticed things look a little different. I’ve been considering this shift for a while and finally went through with it: My newsletter is now called ReImagining Liberty.

It’s not a big change, no. But it does decouple things a bit from specifically me, and so gives some space to expand the publication to other writers in the future—a project I’m thinking through. I also want to tie the essays I write here more closely to the podcast I host of the same name.

The other piece of news is there’s now a way for you to earn free early access to episodes of that podcast. Just tell your friends about this newsletter, and if they sign up, you’ll get free premium access. (Free subscribers count. You don’t need to talk your friends into paying for anything.)

That’s all for now. Thank you again for being a subscriber.

The Worst Argument in Politics

Plenty of people have strong opinions about politics, but most of them are quite bad at making arguments for their preferred policies. Often this is an information problem. Public policy is frequently complicated or complex. Complicated in the sense that, even when we can predict the outcomes of policy interventions, understanding all the moving parts in order to make accurate predictions demands quite a lot of knowledge—knowledge non-experts lack. Complex in the sense that many policy interventions have so many interacting aspects, in so many unknown or novel situations, that accurate predictions are, frankly, impossible—even for experts.

In both cases, even if your predictions will turn out wrong, or you can’t make a full prediction in the first place, you can at the very least mount an argument for why your preferred policy might be good, or why other policies might be bad. What this means is that engaging in political debate, if you want to do it well, demands a great deal of knowledge. But it also demands an ability to construct an argument with and around that knowledge. It’s here, in my fifteen years of being a professional in the world of politics, that I see the most failure. In short, a great many people are just terrible at constructing political arguments, or at assessing the quality of the arguments they draw on to support their side’s preferred policies.

And if there’s a single, most common way people go wrong in constructing political arguments, it’s in not asking themselves a simple question: “Does the argument I just gave prove too much?”

I’ve been meaning to talk about this variety of bad argument for a while, and was finally prompted to do so now because I came across a perfect example of it on Threads recently. It’s about religion, not politics, but it’s a particularly clear version of what I have in mind, so let’s start there.

There’s a lot wrong with this argument, but the problem I want to highlight is this: If we take this argument seriously, then it tells us that anyone who asserts the non-existence of anything is “contradicting themselves” and has “no solid foundation” for their claim that whatever it is doesn’t exist. Yes, it’s true that it’s difficult to turn up evidence that God doesn’t exist. But it’s also difficult to turn up evidence that leprechauns don’t exist. This isn’t to say that disproving God is as simple as disproving leprechauns, but rather that what sounds at first blush like a plausible argument that atheists are irrational is, instead, an argument that commits the person making it to a whole lot of conclusions they don’t believe.

The mistake, then, is in not asking, “What else does the argument I’ve just given prove?”

Now, maybe your argument is solid, and, even though it points to conclusions you don’t like, you’re willing to bite those bullets in the name of consistency. Or maybe you need to add additional steps to show why your argument leads to its conclusion in this particular case, but not in others where it also seems to apply. But to do either, you first need to acknowledge this feature of your argument. Instead, in my experience, most people who advance arguments that have this feature aren’t even aware of it. (You can see a version of this in many of the bad arguments against AI.) The result is that arguments that look solid to the person making them turn out to be rather easy to dismiss.

Here’s an example: It’s common for people who oppose private schools, or who support socialized medicine, to argue that both education and healthcare are extremely important and necessary for a quality life. Because of this feature, they must be provided by the government—and not just paid for by the government, because otherwise tax credits for private schools, or subsidies for private insurance would be fine. And this sounds plausible. Except that if you stop there, you’ll need to answer why that argument doesn’t apply to everything else that’s also extremely important and necessary for quality of life. Few people who oppose private education or support socialized medicine also argue that the government should manufacture and distribute all the food we eat or the clothing we wear, but life is pretty hard without decent food and clothing.

So if you stop with “It’s important, therefore government should support it,” you’ve made an argument that doesn’t hold up to much scrutiny, unless you’re also arguing that government should give us our food and clothing, too. Just like arguing that the impossibility of proving non-existence means you should believe in God doesn’t hold up to much scrutiny unless you’re willing to accept this is also why you should believe in leprechauns.

If you don’t want your argument to pretty much fail on its face, you need to instead recognize when it looks to prove too much, and then give reasons why, in fact, it doesn’t. That’s how you build a better argument.

The Liberal Identity Crisis

One of the culture war’s contemporary tributaries is the conflict between establishment, older, New York Times-type liberals, who are concerned that the cultural-left liberal youth have, in fact, become illiberal, and that cultural-left liberal youth, who lash out at the New York Times types as, in fact, right-wingers unwilling to admit it. Both sides claim the liberal label, and both sides claim that label doesn’t apply to the other side.

The Overton Paradox

Data scientist Allen Downey has an idea that offers a pretty compelling way to understand this disagreement. He calls it the “Overton Paradox.” Conventional wisdom is that people become more conservative as they age. And this is true in terms of self-identification: older people are more likely to say they are conservative than younger people. But if you give them a series of questions intended to tease out their actual beliefs, instead of what they choose to call themselves, it turns out people become slightly less conservative as they get older.

The way to solve this paradox becomes clear when we graph those survey responses to questions getting at conservative beliefs.

Social “liberalism” isn’t a fixed standard, but a relative one. Roughly speaking, to be a liberal is to be more socially tolerant, inclusive, and dynamic than the current social baseline, and to be socially conservative is to be less so. This means that an accurate application of the label can change, even if a person’s underlying beliefs have not. Someone who was justifiably called a radical social liberal in 1924 might well be seen as an extreme social conservative in 2024.

This makes intuitive sense. The world has clearly become more socially tolerant and open over time. And the way that happens is by each new generation pushing things forward, which means it is both socially liberal during its day, but from the perspective of future generations, it’s rather behind.

So the way to solve the paradox (that, on the one hand, it seems like people become more conservative as they get older, but, on the other hand, they don’t actually become more conservative) is to understand that the very same spread of social beliefs that makes you a liberal when you’re in your 20s might well mark you out as a conservative when you’re in your 50s.
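Downey’s relative-label point can be sketched in a few lines of Python. This is a toy illustration with made-up numbers (the function name, scores, and baselines are all hypothetical, not his model or data): hold a person’s beliefs fixed, let the population baseline drift, and the applicable label flips on its own.

```python
# Toy sketch of the "Overton Paradox" idea (hypothetical numbers, not
# Downey's actual survey data): one person's social-belief score stays
# fixed while the population baseline drifts, so the relative label
# applied to that person flips over time.

def label(belief, baseline):
    """A relative label: more progressive than the baseline reads as liberal."""
    return "liberal" if belief > baseline else "conservative"

person_belief = 50  # fixed score; higher = more socially progressive

# Hypothetical population medians, drifting more progressive each era.
baseline_by_era = [("1990s", 40), ("2010s", 55)]

for era, baseline in baseline_by_era:
    print(era, label(person_belief, baseline))
# The same unchanged beliefs read as "liberal" against the 1990s
# baseline and "conservative" against the 2010s baseline.
```

The point of the sketch is that nothing about the person changes between the two eras; only the comparison point moves.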

“I’m a liberal.” “No, I’m a liberal.”

Where this gets interesting in the context of the contemporary culture war is when the “liberal” label matters a lot to people. If you’re generally indifferent to politics—when you’re a normal person, in other words—you’re not terribly attached to political terms. Saying, “Sure, I was a liberal back then, but now I feel more like a conservative” doesn’t bother you.

But if you’re politically engaged, and particularly if your politics is your personal brand, then those labels matter a lot. Not just in terms of how others define you, but also in how you define yourself. Being a “liberal” (or a “conservative”) isn’t a mere descriptor, but your self-image. This is compounded if that label is also your professional identity—if it is effectively how you earn your living.

In practice, then, if you’re an older, established liberal, who has been writing from a self-consciously liberal perspective for your whole career, you’re going to get upset when the young people start telling you that, no, some of your views aren’t liberal at all, but instead quite reactionary. And this is most likely to happen around those issues where the culture has moved the fastest, such as the acceptance and normalization of LGBTQ+ people, identities, and expression.

This gets us to those accusations of illiberalism. You think of yourself as a pretty liberal person—and think of being a liberal as central to your identity. But now the young people are mad at you because your views are, by their estimation, behind the times on transgender people, or on the appropriateness of racially charged humor. You know you’re a liberal, but those kids are telling you that you’re not a liberal, because they’re the actual liberals, and you’re not like them. But you are a liberal, you’re quite confident of that, and so it must be the kids who aren’t liberals. They’re illiberal, and their views thus don’t represent society moving in a more liberal direction, but instead towards illiberalism.

The conflict appears intractable because, in a meaningful sense, both sides are right. Each is liberal by the standards of the time when they came to understand that term. And so, for both, the other side is not liberal by those same standards. What it means to be liberal isn’t fixed, but is constantly evolving, and what is now the youth vanguard will eventually become the accepted mainstream, and then, decades further down the road, that accepted mainstream will come to be reasonably labeled “conservative.” That’s just how social progress happens.

The Hidden Violence in Our Laws

A simple fact about the nature of government is this: It’s violence.

I know we live in a democracy where we decide together what the laws should be, and with the aim of achieving a better life for all. “Government is simply the name we give to the things we choose to do together,” Rep. Barney Frank (supposedly) once said. And we get angry when the state conducts itself violently, like when an NYPD police officer murdered Eric Garner.

But hear me out. Yes, most laws are created with the intent of improving the state of things. And, yes, police brutality isn’t a necessary feature of a well-run government. Yet the reason Daniel Pantaleo choked the life out of Garner is because Garner was selling unlicensed cigarettes. At some point, well-meaning people decided that, if we’re going to allow the sale of cigarettes, only some people will be allowed to sell them, and Eric Garner wasn’t one of those.

That law—“You must not sell cigarettes without a license”—is only a law, and not a request, because it carries with it a threat: If you disobey, people working for the state are permitted, or compelled, to command you again, and then punish you if you persist. Or, in some cases, regardless of if you persist.

That punishment might be a fine, of course. And for the sale of unlicensed cigarettes, it certainly wasn't on the books to murder the offender during arrest. But every law, to effectively be a law, must carry with it the threat of violence—and the understanding that the threat is serious and will be carried through.

Take a pretty innocent law. The owner of your favorite professional football team talks the elected leaders of your city into forking over tax dollars to help him with the costs of a new stadium. You like the team, but you know the owner is quite rich. He certainly has more disposable income than you do. Furthermore, you’re skeptical of claims that the spent tax dollars will be more than made up for in additional revenue from additional business the stadium will bring to the city. (You should be.) The city’s passed a law saying your taxes are going up by some amount, though. That’s a command. What happens if you disobey? What happens if you withhold an amount in taxes equal to what the city wants you to give to the rich football owner?

Obviously, they might not catch you, in which case nothing happens. But what if they do? You might get told to make an adjustment in how much you’ve paid. But what if you refuse? They could issue you a fine. But what if you don’t pay? Eventually, men with guns will show up at your door and demand you come with them. But you say no. At this point, they’ll use physical force—violence—to make you anyway. Resist and that violence will ratchet up. Resist hard enough and they very well could kill you.

And it’s like this for every law. Jaywalking? There’s violence waiting for you if you persist. A ban on incandescent light bulbs? Better hope they don’t catch you stockpiling them. It’s easy to overlook—or imagine away—the necessary connection between law and violence because most people obey, and if you’re fortunate enough to live in a low crime area, you don’t typically see what happens when people don’t. Still, every time you say “There oughta be a law...,” what you’re actually saying is “Men with guns should use violence against people who do (or don’t)...”

But so what? Even if we don’t typically think through all the processes behind each law all the way down to the violent foundations, what does doing so matter? Surely we need laws. We can’t live in a state of lawlessness, and if occasional violence in their enforcement is the price we pay for a lawful society, it’s a price more than worth it, no?

Here’s why thinking of law as violence matters: It gives us a clearer sense of what it is we’re doing when we do political things. The far right tends to get this. They recognize that state action is grounded in violence, but they celebrate it. They fantasize about directing that violence at their enemies, or out groups, or disfavored racial and ethnic categories. They cheer when cops bust heads, or when their presidential candidates promise carnage if they don’t get their way. They have no illusions about this aspect of the state’s nature, but they have a profoundly unethical perspective on what we should make of that.

The left, on the other hand, particularly the progressive left, tends to pretend the state is something different. (The Marxist/Leninist or Maoist far left, on the other hand, frequently looks more like the far right when it comes to recognizing the state’s violence.) Progressives and mainstream left leaners, unlike the far right, either believe, or tell themselves they believe, Barney Frank’s line. Laws are just us working together for a common purpose, and when things get violent, it’s a deviation from the ideal. Banning the sale of unlicensed cigarettes isn’t a demand that the state threaten violence, it’s a recognition that cigarettes are bad for you and the world would be better if they were less accessible.

Neither this nor the far right’s glorying in violence is an ethical perspective to take when engaging in politics. Not because the only ethical answer is to abolish all violence. It’s reasonable to conclude that at least some laws are necessary, and those laws must necessarily be backed by force. Rather, an ethical perspective on politics sees the state for what it is, while also believing that there’s nothing glorious or praiseworthy about the infliction of violence, and so proceeds cautiously and mindfully in political action so as not to create more violence than absolutely necessary.

This leaves open a lot of questions about institutions and justice, and that’s okay. We needn’t answer those now, if they can ever be fully answered at all. What’s key is that getting those questions right, and arriving at good answers, must begin with an ethical perspective on what we’re doing in the first place. If we ignore the nature of this powerful tool we bring to bear on social problems, if we cover our eyes or stick our fingers in our ears about the necessary violence of political action, then we will inadvertently or callously inflict too much, or set up and enable a system far more violent than we desire. Or, if we see the state for what it is, but relish that, then we will direct government not to just ends, but to evil ones.

A Politics of Harmlessness

We should aim for harmlessness. No matter what else we set ourselves to throughout our lives, and no matter what other values and principles we might hold to, underpinning and informing it all must be a commitment to refrain from causing harm.

Sometimes harm is unavoidable, of course. We can find ourselves in tragic situations where we have no choice but to inflict suffering, if only as an alternative to inflicting greater suffering through a different course of action. The world isn’t perfect and it isn’t fair. But our intent should be harmlessness, nonetheless. And this intent, if we take it seriously, means developing an awareness of which actions are harmful, and which harmless.

There are obvious examples: Don’t hit people. Don’t take their stuff. These fall under what we typically think of as rights violations. But refraining from violating strict rights—however we might define and delineate those—isn’t all that’s necessary to avoid inflicting harm. We can be cruel without transgressing rights. We can make others’ lives worse without hitting them, and without taking their stuff. To the extent that our actions affect others, we should strive to make sure those effects make their lives better.

Our perspective of harmlessness shouldn’t be limited to an outward direction, either. Just as we shouldn’t harm others, we also shouldn’t harm ourselves. The world inevitably inflicts pain upon us. We will get sick, grow old, and die. But so much of what we suffer through in our lives, so much of the unease and stress we feel, isn’t because of something the world has done to us, but rather the way we interact with the world and react to it. If you fall down and skin your knee, that’s unquestionably painful, and that’s a pain you’ll have to bear for as long as it takes the wound to heal. That’s a fact of the world, and not within your control. What is within your control, however, is how you respond to injury and discomfort. Do you accept it and move on? Or do you instead curse fate for giving you this skinned knee, dwell on the pain, and give the wound your full attention, letting it consume you, until finally, begrudgingly letting go?

This is how we fail in turning that principle of harmlessness inwards. We give in to aversion, focusing our attention on the pain and the unfairness of the injury. We give in to craving, longing for when we were healthy and letting that longing take over our mental landscape. We give in to a delusion that, if only we could exercise more control, and if only the world weren’t so unfair, we wouldn’t get skinned knees in the first place. And the suffering we inflict on others is just as much a product of aversion, craving, and delusion. We want them to behave in just the way that conforms to our tastes, or we want them to stop acting in ways that make us uncomfortable, or we want the world to stay the way it is (or, more often, the way it was when we were young and imagined ourselves to be happier and more stable), and demand that its relentless dynamism just stop.

The path to harmlessness, then, is the path out of aversion, craving, and delusion. With those set aside, we can better see which of our actions and beliefs are harmless and wholesome, and which are harmful and unwholesome.

With that perspective in mind, we can turn to the question of politics, and what a politics that takes harmlessness seriously might look like. And the clearest way to approach answering that is by looking at politics as it usually manifests, and how it is the institutional and social embodiment of the very aversion, craving, and delusion we need to abandon. Because, to an important extent, a politics of harmlessness just is a political culture not bound up in those traits that are the genesis of harm.

Social contract theory tells a neat story about the origin of government, and so the origin of politics. We used to live in a state of nature, free to do as we pleased, without any protection from those whose “as they pleased” was to hurt us. Thus we agreed it would be better if, instead, we established a ruler who would then protect us from violence, theft, and so on—and at the reasonable cost of giving up just a little of our freedom, and a little of our resources. And, unless this ruler was an autocrat, he’d be influenced by, to one degree or another, the will of his subjects. Hence, politics.

Whether this narrative has any historical truth (it doesn’t) is beside the point. What’s important is that it neatly frames a helpful way to conceptualize government (a powerful tool for guiding the behavior of humans, using violence and threats of violence as its mechanism for enforcing that guidance) and politics (the ways the people go about directing the use of that tool).

Unfortunately—or, in many cases, tragically—that tool gets directed under the influence of aversion, craving, and delusion. We use the government not (just) to reduce harm, but to inflict it. If that group of people over there—who are not themselves directly harming anyone, but are instead just behaving or expressing themselves in ways different from what we might prefer—are making us uncomfortable, or clashing with what (we imagine to be) our “traditional” values, we use the political process to direct the state to punish them. We declare their expression (or their different religion, or their way of living) illegal, and then let the state enforce the law against them. Or if the cultural patterns we grew up with and are used to give way to evolution and change (as all cultural patterns inevitably do), we insist that the government step in to punish those at the forefront of that change, or subsidize those seeking to hold it back. We give up on the notion of harmlessness because, consumed by aversion, craving, and delusion, we convince ourselves that this time, this bit of violence, isn’t harm, or is perhaps justified harm, and we use politics to carry it out in our name.

This sort of thinking infects so much of politics that it can be difficult to imagine politics without it. It is so common that most people don’t even recognize it as harmful. Rather, they view it simply as what politics is for. They lose sight of harmlessness and, either from ill-will or simple ignorance, inflict terrible suffering upon each other. A politics of harmlessness would reject those uses of government. It would recognize that happiness depends upon non-harming, and that our own contentment is not more valuable than the contentment of everyone else we share the country (and the world) with. It would respect the dynamism of human societies and cultures, and the right of everyone to peacefully pursue their favored lifestyles, religions, modes of expression, and preferences. And it would see clearly how the attempt to prevent that dynamism and diversity—an attempt always destined to fail—can only bring greater suffering, not just to others, but to ourselves.

Originally published at Students for Liberty’s Bastiat Scrolls.

Why AI is the New Sliced Bread

You’re an artisan. You’ve spent years or decades cultivating your craft, drawing knowledge and inspiration from family, teachers, peers, and artistic heroes. Inspired by them, you learned, practiced, and developed, hoping that someday you can leverage your passion, skills, and creativity into not just a remunerative career, but an emotionally and spiritually fulfilling one. But then someone invents a way to automate what you do, to have machines carry out the same craft. The results aren’t quite as good as what artisans like you can do, but they are good enough to meet most people’s needs—and, even more damaging to your dreams, they are considerably cheaper. You can only work so fast, and thus need to charge a pretty high price for your work to keep a roof over your head, the lights on, and your stomach full. The machines work far quicker, producing in mass, and you now justifiably fear you’ll never be able to make your art more than a hobby.

To make matters worse, people you know, or bump into on the street, or come across chatting about it, are consuming the product of these machines, showing it off to each other, and discussing how much they like it. These machines only work because their designers stole the knowledge people like you have been building over generations, watching what you do and how you do it, so the machines can reproduce the outputs of that knowledge and craft. And all without compensating you, or any of your peers, and all in the service of producing soulless simulacra of your art.

I’m speaking, of course, of baking bread.

There was a time when every loaf of bread had a baker: an actual, living, breathing person who used his or her hands to mix the ingredients, knead the dough, shape it, and bake it. They learned this craft from other bakers, in an unbroken chain back to the very invention of bread, each refining his or her skills, innovating, bringing that human element that made every loaf unique. Then came factory baking, and, in 1928, Otto Frederick Rohwedder’s automated bread-slicing machine went into commercial use. Today, you can still find artisanal bakers, of course, but most every loaf of bread, bought by most everyone in the developed world, came from a machine. Each of those consumers could have, if they cared enough about the artists who’d lost out in this transformation, instead paid a little or a lot more to get a loaf from an artisan.

Even you. Because—whether you’re an artist, a writer, or something else entirely—I bet you buy the majority, if not all, of your bread from that aisle in the grocery store stocked with nothing but the products of automating machines. You’re part of the problem. Not just by giving your money to corporations for their soulless product instead of an artisan for her soulful loaf, but also because every factory baked good you buy is the result of uncompensated use of the knowledge and craft developed by bakers going back generations.

Yet this probably doesn’t bother you. First because you’re used to benefiting from technologies drawing on what came before, and second because, while you might agree that the artisanal loaf tastes a bit better, you’d rather spend your money on other things. If you had no choice but to buy the more expensive bread from the small bake shop down the street, you’d simply buy less bread.

The most common arguments against the use of LLM technology—the chatbots like ChatGPT that produce text from a prompt, or the image generators like Midjourney that produce visual works—take a few forms. First, that these technologies depend upon learning from the work of human artists and writers, and those artists and writers weren’t compensated, or weren’t compensated fairly, for the use of their creations as training material. Second, even if they were, it would be wrong to use ChatGPT to generate text or Midjourney to generate images because doing so takes business away from human artists and writers.

What’s more, the resulting prose and pictures aren’t bad just because they’re so cheaply produced, or because they are patterned after the uncompensated knowledge and skill of humans, but also because they’re shameful. They lack the human element: the creativity and spark found in prose and imagery made by humans. Thus the person generating or using these knock-offs is triply wrong: he’s benefiting from stolen goods, he’s depriving a human of money, and he’s fundamentally misunderstanding the nature of the thing he’s consuming.

All this sounds plausible when we’re talking about art and writing, because art and writing are artistic. They’re the kind of work creative people do. We view them as different, somehow, from the craft of shoes, phones, aluminum pipe, perfect-bound paperbacks—or bread. They’re elevated, while the work of others is not. So while it is fine to eat Pepperidge Farm wheat bread, and it’s fine to put your dinner on a machine-cut Ikea table, it’s not fine—in fact, it’s morally wrong, and outright repugnant—to stick a Midjourney-generated image on the cover of something, or to use ChatGPT to write some marketing copy.

But I don’t know if artisan bakers agree that their craft isn’t as elevated, as quintessentially human, as writing or drawing. I don’t know that the artisanal cobblers or carpenters do, either. So what’s different? Why can artists and writers happily save a few bucks buying from Pepperidge Farm, Nike, or Ikea, but when the technology instead allows the rest of us to save on images and prose—or to be able to have custom images and prose at all, given how the cost of an artist or writer is often well outside our reach—that technology is immoral, dangerous, and in need of shutting down?

An argument for that distinction might exist. There might be good reason to think this time the automation is different. But the problem with so many arguments against AI technology is that they don’t address the distinction, and instead either wave it away or ignore it entirely. This makes the arguments look less like principled stands and more like self-interested protectionism and self-serving snobbishness.

If artists and writers complaining about AI don’t tackle that question head on—don’t give an account of why their creativity mustn’t be automated, but everyone else’s creativity is fair game—if they instead just assume the obviousness of a difference, then the rest of us can rightly ask them why they’re so callously and selfishly hurting bakers by purchasing factory produced bread.

Against a Life of Moderation

Last Sunday, Aaron Bushnell set himself on fire, his death an act of protest against Israel’s actions in Gaza. This self-immolation provoked much conversation, as was its intent, and one entry into that discourse comes from Graeme Wood at The Atlantic. His claim is that self-immolation is bad as a form of protest, because it is ineffective and socially contagious, and also that, if the act is to be carried out, it should be only in service of a particularly acute and worthy cause, which Aaron Bushnell’s was not.

Many have attacked the main thrust of Wood’s argument, but I’m not interested in doing so here. Rather, what I do want to explore is one point he makes in his essay that is both factually wrong and wrong in a way that speaks to the ethical perspective I write about and apply to contemporary social and political issues.

Wood frames his case by citing the most famous example of self-immolation: “In 1963, the monk Thich Quang Duc soaked himself in gasoline and lit himself on fire to protest the government of the Vietnamese leader Ngo Dinh Diem.” You’ve probably seen the photograph. Wood proceeds to discuss Bushnell within a Buddhist context, noting that he “described himself as an anarchist, and generally eschewed what Buddhists might call ‘the middle way,’ a life of mindful moderation, in favor of extreme spiritual and political practice.”

The problem with this line is that Wood is simply wrong about what Buddhists mean by the Middle Way. This matters, even if you aren’t a Buddhist, not just because we should try to be accurate in our understanding of ideas, but also because the Middle Way is an important philosophical insight regarding the nature of an ethical life. Here especially, Wood’s misrepresentation of the Middle Way has an ideological and political root, and one that genuine social and political progress depends upon avoiding.

By his characterization, it is something like the common mischaracterization of Aristotle’s “golden mean”: that the virtuous act is found in the middle between extremes. That’s not what Aristotle meant, of course; he instead advised us to use our wisdom to identify not the middle between two non-virtuous poles, but the correct position between them, with “correct” varying based on the particulars of the situation. Sometimes anger is the wrong response, but sometimes not getting angry is a sign of a lack of virtue or understanding.

Likewise, Buddhism’s Middle Way is not a call for us to always stake out a moderate position, and so to avoid radicalism in our spiritual practice. Instead, the term originates in the story of the Buddha’s quest for awakening. At the time, there were quite a lot of wandering mendicants and ascetics seeking enlightenment, and with diverse methods. A popular one was self-mortification. You effectively starved yourself as a way to turn away from desire and craving, to abandon this worldly body for spiritual pursuits. The Buddha tried that and it made him miserable without bringing him any closer to finding an end to suffering, the goal he had set himself at the start of his quest.

Next to the self-mortifiers, plenty of people then, as now, saw the path out of suffering as one of extreme self-indulgence and hedonism. The Buddha rejected that, as well. Instead, he mapped out a Middle Way that did not depend upon starving oneself, but also recognized the causes of suffering in the very sort of craving the hedonists embraced. As it’s put in an early Buddhist text, “[T]hese two extremes should not be cultivated… What two? Indulgence in sensual pleasures, which is low, crude, ordinary, ignoble, and pointless. And indulgence in self-mortification, which is painful, ignoble, and pointless. Avoiding these two extremes, the [Buddha] woke up by understanding the middle way of practice...” Rather than focusing the quest out of suffering on destroying the body, or on giving in to our craving for pleasure, the Buddha offered a new, but still radical focus: the intentional, careful, and diligent cultivation of a perspective on, and understanding of, ourselves and the world such that we could set aside the craving, aversion, and ignorance that are in fact the source of our suffering.

The term was also applied to the Buddha’s approach of not taking a position on specific metaphysical questions, such as whether the cosmos is eternal or infinite, or whether the soul and body are the same thing. Because his goal was the end of suffering, and because achieving that goal is demanding and time-consuming, he again counseled focusing instead on what, from that radical perspective, was relevant to accomplishing that.

Later, “Middle Way” took on a third meaning, in the branch of Buddhist philosophy known as the Mahayana. Here it meant an understanding of “emptiness,” or the notion that all things are empty of intrinsic existence and nature. By focusing on this (again, quite radical and extreme) idea, and by coming to understand it, awakening could be achieved.

None of these three approaches to the Middle Way are “a life of mindful moderation,” as Wood puts it. None are rejections of the extreme, because all of them, by the standards of the day, were extreme. The Eightfold Path, the practice at the heart of Buddhism, is such a stark departure from the way most people live, and such a demanding one, that there’s little way to avoid describing it as the sort of “extreme spiritual ... practice” Wood tells us Buddhism rejects. Buddhism is far from moderate.

Instead, the Middle Way is a radical and extreme focus on what matters, in both one’s internal perspective and outward spiritual and ethical pursuits.

Put another way, if Wood is right in his characterization of the Middle Way, then the Buddha was a bad Buddhist, and so were most of the major figures in the history of Buddhism. The Buddha did not set himself on fire, but he did take on the radical lifestyle of a wandering mendicant, owning nothing but a robe and an alms bowl, and begging for his food. And while he didn’t engage directly in explicit political discussion or participation, he did, quite radically, refuse to accept the natural superiority of the brahmin class, and thus the basis of the Indian caste system. I feel safe in assuming his Indian contemporaries found that quite politically extreme.

Whether Aaron Bushnell was right to take his own life, whether it was wise from the perspective of his cause to do so in such a public and shocking manner, and whether that cause was just, are questions worth answering. What we can say with certainty, however, is that Buddhism, as a comprehensive ethic for how to live well, do good, and find happiness, isn’t a moderate position or lifestyle. And the fact that it isn’t tells us something important about the need to be radical—extreme, even—in the standards we set for ourselves, and the demands we make for a better, more peaceful and compassionate world.

I’ve been writing an ongoing series setting out the core ideas of Buddhist philosophy and ethics. If you’d like to learn more, and get a sense of Buddhism’s radicalism, start here, then read this, and finally this.

Substack Doesn't Want You to Leave

This is a newsletter primarily about political and social ethics, but it’s also a newsletter that switched away from Substack a couple of months ago, for reasons related to the way Substack leadership runs their company. Those are bound up in ethics, even if not in the way I typically write about, and I also know that a lot of my readers have their own newsletters, and so are interested in the state of that ecosystem. So bear with me as I dig into a very odd new feature Substack proudly announced today: Direct messaging. From their post:

Direct messages live in the Chat tab on the Substack website and app, and appear alongside chats from your subscribed publications. You can start a DM from any writer or reader’s profile page, from the Chat tab, or by hitting “Share” on a post or a note. When you receive a DM, you’ll be notified in the app and by email.

Why would Substack add “direct messaging,” when, as a reader, you can already direct message with Substack authors you subscribe to by just hitting “Reply” in your email? Why add a new direct messaging system within the official Substack app?

First, it’s important to understand how Substack’s newsletter infrastructure works. When you start a Substack, and people subscribe for free, they’re adding their email addresses to your list. That list, whether it's a handful of your friends or tens of thousands of addresses, belongs to you. You can, at any time, download the list and, if you want to, upload it to another newsletter platform, such as Ghost or Beehiiv. In other words, the main connection point you have with your readers (their email address) is portable. You can leave Substack, take your list elsewhere, and continue sending newsletters to your readers, without them having to resubscribe to you.

Just as important to understanding what’s happening with Substack is how their system handles paying subscribers. With free subscribers, you have a connection point: an email address. With paying, you have two: an email address and a credit card number. That’s why, if you turn on premium subscriptions in Substack, you have to link your Substack publication to a Stripe account. Stripe is the big player in online credit card processing. As a reader, when you pay for a Substack, you’re giving your credit card information to Stripe, and it’s Stripe that stores it, not Substack. And here’s the important bit: Most other newsletter platforms use Stripe for handling paying subscribers, too. The result is that, as a Substack publisher, even your paying subscribers are portable. Want to move to Ghost or Beehiiv? Download your Substack subscriber list, import it into the new platform, and then attach that new platform to the same Stripe account you were using for Substack. Run the handy automated migration tool your new platform comes with, and your paying subscribers will keep getting your premium content—and keep giving you money each month—without any need for them to resubscribe.

That’s a big deal. It means that, as a publisher, you aren’t locked in to Substack. You can use it to build a huge platform, or even a huge business, and then just ... leave. It doesn’t cost you anything to do that, and the whole process takes maybe a couple of hours. That’s good for publishers, because platform lock-in is bad. And it’s a selling point for Substack, because they can say to writers, “We’re giving you great tools, and you can use them as long as you like them, and as long as you think they’re worth 10% of your earnings, but if you decide you’d rather be elsewhere, we’re going to make it easy and painless for you to switch.”

Nothing Substack has done lately changes any of that. You can still build a large audience there, and have a large number of them paying you, and then take it all with you if you jump ship. But that’s not the whole story. Because the other half of it, and the other key to understanding why Substack would add “direct messages” in its app, is Substack’s financials.

Running a platform like Substack isn’t cheap. You’ve got staff, infrastructure, and the costs of sending untold millions of emails. This is made worse by the fact that Substack gives most of it away for free. This is why they’ve grown so quickly and become so dominant. Unlike nearly every other newsletter platform, they don’t charge you anything to host a free newsletter, even if you have thousands or hundreds of thousands of subscribers. And if you do turn on premium subscriptions, their pricing model is to take 10% of your earnings, instead of asking for a fixed monthly fee. But this also means Substack needs to earn a lot more from each of the people who are running paid newsletters than other platforms do, because it’s giving away so much for free to those users who aren’t. (My own platform, Beehiiv, in contrast, offers its service for free if you have 2,500 or fewer subscribers and don’t charge for subscriptions, and then a flat $50 or $100 a month if you want to charge, depending on which features you need. But it doesn’t take any cut of your earnings beyond that.)

This all works if Substack is making a lot of money. But they’re not. In their prior feature announcement, made on February 22 and introducing an upgraded recommendations system, they included this quite telling line: “In its short life, Recommendations has become an indispensable part of the Substack network, which in recent months has swelled to more than 3 million paid subscriptions to writers and creators.”

There are 3 million total paid subscriptions across Substack’s network. Now, 3 million is a big number, sure. But what does that mean for Substack's actual earnings? Not good news. Let’s ballpark it by assuming that the typical subscription fee newsletters charge is $8 a month. That’s the default Substack sets the price at when you turn on premium subscriptions, so it’s probably a pretty good guess. And let’s also assume, to make the math easy and to paint a rosier picture for Substack, that all of those 3 million subscribers are paying monthly, instead of getting the annual discount.

Three million subscribers paying $8 a month works out to $288 million a year flowing into the Substack network. But the company itself pockets just 10% of that, because the other 90% goes to the writers generating those subscriptions. So Substack’s actual take is, by this estimate, $28.8 million a year. Which is tiny. Even if $8 is lowballing the typical subscription fee (which I suspect it isn’t, because my strong guess is the typical fee is closer to $5 a month), by any reasonable measure Substack isn’t bringing in enough dollars to cover their costs.
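If you want to check that back-of-envelope math, or plug in your own guesses, here it is as a few lines of Python. The $8 monthly fee, all-monthly billing, and the 10% cut are the assumptions from above, not reported figures:

```python
# Back-of-envelope estimate of Substack's annual revenue.
# All inputs are assumptions, not reported figures.
paid_subscriptions = 3_000_000  # total paid subs across the network
monthly_fee = 8                 # assumed typical subscription price, USD
substack_cut = 0.10             # Substack keeps 10%; writers keep 90%

gross_annual = paid_subscriptions * monthly_fee * 12
substack_annual = gross_annual * substack_cut

print(f"Gross flowing through the network: ${gross_annual:,}/year")
print(f"Substack's cut: ${substack_annual:,.0f}/year")
# → Gross flowing through the network: $288,000,000/year
# → Substack's cut: $28,800,000/year
```

Swap in $5 for the monthly fee and Substack’s cut drops to $18 million a year, which makes the picture even bleaker.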

With that in mind, go back to the discussion of portability. If you’re a big publisher on Substack, generating let’s say $10,000 a month in subscription fees, you’re taking home $9,000 and giving Substack $1,000. (You’re not actually taking home $9,000, because you’re also paying credit card processing fees to Stripe, but we can ignore that for purposes of the analysis here.) That’s $12,000 a year you’re paying to host your newsletter on Substack. If you went to Beehiiv and chose their most expensive plan, you’d be paying $1,200 a year.
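The same sketch extends to the publisher’s side of the ledger. Using the hypothetical $10,000-a-month publisher from above, and Beehiiv’s top plan at a flat $100 a month:

```python
# Annual hosting cost for a hypothetical publisher earning
# $10,000/month in subscription fees (Stripe fees ignored).
monthly_revenue = 10_000
substack_annual_cost = monthly_revenue * 0.10 * 12  # 10% cut, annualized
beehiiv_annual_cost = 100 * 12                      # flat $100/month plan

print(f"Substack: ${substack_annual_cost:,.0f}/year")
print(f"Beehiiv:  ${beehiiv_annual_cost:,}/year")
print(f"Savings from switching: ${substack_annual_cost - beehiiv_annual_cost:,.0f}/year")
# → Substack: $12,000/year
# → Beehiiv:  $1,200/year
# → Savings from switching: $10,800/year
```

Note that the Substack figure scales with revenue while the Beehiiv figure is flat, which is exactly the dynamic the next paragraph describes.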

What this means is that, the bigger your newsletter gets on Substack, the more reason you have to leave Substack, because you can save an extraordinary amount of money by doing so. And, as we discussed above, it is very easy to take the email list and paying subscribers you built on Substack and do just that. You’re not locked in, and the more money you earn for Substack, the more reason you have to stop using it.

To fix that and get their financials on a steadier path, Substack needs to make it harder for you to leave. They need to lock you in. Now, they’re not going to prevent you from downloading your email list, nor are they going to force you off of Stripe. (They might at some point, if things get bad enough, but their whole brand is built on not doing so, which makes it feel like a last resort move.) Instead, the way they build lock-in is by shifting the primary points of connection between you and your readers from open and portable email addresses to closed and proprietary relationships.

And, thus, “direct messages.” Thus the “followers” you get in Substack Notes. These are points of contact between you and your readers that aren’t portable. If you leave Substack, you’ll lose them. The hub of all of this is the Substack app, and a great deal of the feature announcements they’ve made lately are about getting more and more people to view the app as the primary way they read Substacks and interact with Substack authors. The more publishers rely on the app instead of email as the basis of their relationship with their readers, the harder it will be for them to move their newsletters off of Substack’s platform.

Direct messages aren’t about giving you a better way to connect with your readers. They’re about shifting your relationship with your readers into a closed system so you’re less likely to flee to platforms that don’t take one dollar out of every ten you earn.