Four Bad Arguments Against AI

Many of the most common critiques of LLM technology fail under scrutiny.

This week, OpenAI (the company behind ChatGPT), and Google (the company behind, well, everything) unveiled a bunch of new and upgraded tools powered by AI. Some of it was pretty neat, some of it pretty exciting—and all of it, at least on the social media platforms where I hang out, came under withering criticism from people convinced this technology is fundamentally evil, or obviously a dud.

There’s a lot of hype around LLMs (the technology powering the current wave of AI). There are also legitimate concerns about the effect AI might have on various industries. That said, the way a lot of people who don’t much like this tech express their dislike—the arguments they use to demonstrate its evil-ness or its dud-ness—frequently aren’t very good. And they frequently aren’t very good in a particular way that’s worth highlighting. Not just because doing so might help us to better sort the good arguments from the bad, but also because the way they go wrong is common to a lot of arguments, about a lot of topics.

Namely, the four arguments listed below (which are, in my experience, the four most common criticisms of LLM technology) all run into trouble with consistency. They all say “AI is bad because of X,” but then don’t look at what else X might apply to. And because, in each case, X applies in other situations where the person making the argument doesn’t assert the badness of those other things (and, in fact, likely would reject similar arguments about those other things’ badness), we should be suspicious that the person isn’t actually convinced that AI is bad because of these arguments, but rather has already decided AI is bad for some other, unstated reason, and is now just hunting for readily available, and plausible-sounding, arguments to rationalize that judgement.

1) “AI will take jobs.” It is true that tools powered by LLMs will make some jobs obsolete, reduce the demand for others, and make some people’s work less remunerative. But the same has been true of every technological advancement with an economic impact. Electric lights put lamplighters out of business. Industrial farming equipment made it harder to earn a living harvesting crops. Automated factories reduced the demand for factory labor. Digital typesetting enabled by personal computers destroyed the physical typesetting industry. Etc. Etc. And in each of these cases, two things are true. First, while some industries and some workers did suffer an economic hit (sometimes a crippling one) from those changes, we didn’t see a long-term, permanent increase in unemployment, or decrease in wealth. The economy adapted, and new jobs were created to replace the old. Second, basically no one today would want to take back the introduction and widespread use of those new technologies. We acknowledge that they made the world better. So, yes, AI will take (some) jobs. But if you’re going to use that as an argument against it, you need to explain why you aren’t also calling for the abolition of the personal computer, or the industrial bread baking machine.

Why AI is the New Sliced Bread: Artists who fight AI may need to look in the mirror (www.aaronrosspowell.com/p/ai-new-sliced-bread)

2) “AI makes mistakes.” LLMs have a problem with hallucination. If you ask ChatGPT to teach you about a subject, there’s a decent chance some of the facts it gives you won’t be quite right. The AI models have gotten a lot better about this over time (GPT-4 makes fewer such mistakes than GPT-3), but there’s reason to believe, given how LLMs work, that they’ll hit an upper limit on accuracy, and that they’ll always be prone to getting at least some things occasionally wrong. What this means is that if you’re a student using AI as a tutor to teach you about the Great Depression, you can’t blindly trust everything it says. But you know who else you can’t blindly trust? Your high school history teacher. Or the article about the Great Depression you found on Google. Yes, AI makes mistakes, but so do human sources of information. (One of the things you quickly learn when you spend a career in DC public policy, for example, is that “authoritative” writers about public policy often get basic facts and details wrong, but state those falsehoods in quite convincing ways.) So, if you’re going to argue that we shouldn’t use this technology because the information it provides isn’t perfect, you need a story about why you also aren’t calling for the abolition of high school history teachers or mainstream newspaper columnists. Or you need to show that it is so much worse than the alternatives that it’s not worth using at all.

3) “AI depends upon unattributed work.” At some point, courts (and eventually the Supreme Court) will decide whether the way contemporary LLMs are “trained” violates intellectual property laws or whether it amounts to fair use. Writers and artists are convinced the answer is obvious. I’m not. The way AI models are trained is that they look at a whole lot of content by browsing the internet, and then use it to build up what amount to models of probability. They don’t copy those drawings and novels into their database, but instead use them to figure out what drawings and novels look like, and then use those statistical results to predict what a painting of a cat or a typical fantasy RPG scenario might be. Perhaps that’s stealing, because the artists and writers who “inspire” the output of LLMs don’t get credit, and don’t get paid. But here’s the thing: What I just described is precisely what human writers and artists do, too. We look at art we like, or read novels we enjoy, and then turn our hand to creating new stuff inspired by the old. If ChatGPT is stealing IP, then so was every punk rock band that sounded like the Ramones. If Google’s Gemini is plagiarizing, then so are all those Tolkien-inspired novels that fill out the genre section at your local bookstore. Everything is a remix, and if everything is a remix, then you need an argument for why remixing is only bad when it’s a computer doing it.
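Real LLMs are, of course, vastly more complicated than this, but a toy sketch can make the distinction between copying and modeling concrete. The little “model” below is a hypothetical bigram predictor of my own invention, not anything an actual LLM does at scale; the point is simply that it never stores the sentences it was trained on, only statistics about which word tends to follow which, and then it samples from those statistics to produce new text.

```python
# A toy "language model": it never keeps the training text itself,
# only statistics about which word tends to follow which word.
import random
from collections import defaultdict, Counter

def train(texts):
    # Count how often each word follows each other word.
    counts = defaultdict(Counter)
    for text in texts:
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    # Turn raw counts into probabilities; this table is all that is retained.
    model = {}
    for prev, followers in counts.items():
        total = sum(followers.values())
        model[prev] = {w: c / total for w, c in followers.items()}
    return model

def generate(model, start, length=10):
    # Predict one word at a time from the learned probabilities.
    word, output = start, [start]
    for _ in range(length):
        if word not in model:
            break
        followers = list(model[word].keys())
        weights = list(model[word].values())
        word = random.choices(followers, weights=weights)[0]
        output.append(word)
    return " ".join(output)

model = train([
    "the cat sat on the mat",
    "the dog sat on the rug",
])
print(generate(model, "the"))
```

The output is new text shaped by the statistics of what the model has seen, which is, in miniature, the sense in which an LLM is “inspired by” its training data rather than quoting from a stored archive of it.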

No, Midjourney Is Not Stealing Your Work: The argument that AI image generators steal from artists would mean that all artists are thieves (www.aaronrosspowell.com/p/no-ai-is-not-stealing-your-work)

4) “AI isn’t useful.” While the utility of this technology is probably overhyped, AI isn’t blockchain. We were all told Bitcoin and its various analogues would revolutionize the world, and that “put it on the blockchain” was the solution to basically every problem in economics, governance, and society. That turned out not to be remotely true, and most people never found any real use for cryptocurrencies beyond financial speculation—and financial grift. This led to a narrative about “tech bro”-hyped technologies, and that narrative’s getting turned against AI. But it doesn’t work, for a pretty simple reason. Unlike blockchain, AI, even in its early state, is already useful. 75% of knowledge workers are using AI in their work. Students are using it to learn. I use it nearly every day to help me find relevant dialogues in thousands of pages of translated early Buddhist philosophical texts. Even if innovation stopped today, AI would be a hugely helpful tool in many applications. Is it perfect? No. But if your argument against it is that the tool doesn’t have perfect utility, you’re going to have to explain why you aren’t similarly taking a stand against every other imperfect tool we (and you) routinely use.