ethics for assholes, part 3
why ethics can't be too hard
I said before that being moral and being competent are related. They’re not the same, but being competent means that you reason well and pay attention to the things that matter, which is also what happens when you’re ethical. A particular theory of ethics like utilitarianism or Kantianism would tell you more than this because those theories are not about reasoning generally but about which things to care about or focus on once you’re engaged in ethical reasoning: happiness, freedom, eudaemonia. But reasoning more generally isn’t assessed by whether you care about the right things. And reasoning alone also won’t get you all the way to being ethical: “treat likes alike” only gets you so far, which is why Aristotle and even Kant—who clearly values reasoning more than just about anyone—end up taking more substantial ethical positions than just saying “reason well.” Kant took those positions by fetishizing rationality (that was kind of the fashion: if fetishizing rationality isn’t the definition of the Enlightenment, I don’t know what is), so “reason clearly” also becomes “respect reasoners’ rationality.” And then it seems like rationality becomes something more than reasoning. Anyway, Kant also thinks women can’t reason well, so I don’t know how helpful it is to go down the path of talking about what Kant gets wrong.
Here, though, I want to think about why ethics can’t be that hard. This is why I started by talking about reasoning. If ethics were unrelated to reasoning—if ethics were instead something like obeying the rules, whatever they were—then ethics could be hard, even impossible; or trivially easy; or anything in between.
Imagine the extreme version of the view that ethics has nothing to do with our reasoning practically about what we ought to do. Instead, imagine that ethics is what god or the universe commands us to do, or what is objectively best in some way. Even if you think you hold this position, you probably also assume you have some rough idea of what the universe thinks morality is. But why should you have any idea at all? The universe is unimaginably huge, and we’re unimaginably not. Thinking that ethics is somehow true in the universe and that we just so happen to understand it reminds me a little of how people will say that god is unfathomable, but then they actually picture god more or less like a really great human.1
If you think that morality is objective in the sense that it’s baked into the nature of the universe, you have to take seriously that it might be entirely unrecognizable to us as anything ethical at all. Maybe the point of morality is to maximize purple in the universe. Murder is wrong only to the extent that it reduces how much purple there is, and is sometimes even mandatory!
Ok. That’s obviously absurd. How could the universe care about purple? First of all, purple is something that (most) humans see in the presence of certain wavelengths of light and certain background conditions (I don’t know enough vision science to know what goes into seeing purple; also, I’m colorblind, so I can’t even introspect about it, for whatever that would be worth). So isn’t purple a human-centered idea? Yeah, maybe. Maybe that’s a problem for the position that the universe cares about maximizing purple, and maybe it’s not. That’s my point: if you really think that right and wrong are baked into the universe, you have to explain how we have any idea of what they are. And it doesn’t take much understanding of what the universe is like and what we’re like in comparison to the universe to realize that the idea that the universe “cares” or “thinks” anything at all is so far beyond what we can understand that our guesses are just that.
Of course, it could still be true that there is an objective sense of right and wrong somewhere in the universe. But now I feel a little like my reaction if I try to take brain-in-a-vat scenarios seriously enough. Maybe I really am a brain in a vat, or—even more skeptically—“I” am just thinking, happening sequentially and unified in some way, with some sense of interaction with information. (I mean, if you’re going to be skeptical of the external world, why think that there are brains?) If I really take those scenarios seriously, though, I don’t conclude that I don’t know whether there’s a table in front of me or whether I’m typing on my keyboard (which makes me so happy to type on now that I have switches in it that I like). No, instead I conclude that whatever “there is a table in front of me” means is whatever this is here and now in front of me.
If you want to convince me that there’s no table in front of me, a brain-in-a-vat scenario isn’t going to do it, and neither is the view that there are no physical objects because there is only information and information processing. Instead, to convince me that there’s not a table in front of me, you’d have to convince me that I’ve been drugged, or that this is a stage prop half-table, or even that I’m using the word “table” incorrectly.2 I might be wrong about what the entire universe is like, but when I talk about “tables,” I’m talking about something within my understanding of a universe. I don’t know if this is the most common response to spending a lot of time thinking about skeptical scenarios, but I am pretty sure that no one interacts with the world as if they’re living in a simulation. They still set their coffee down on the table, no matter what background ideas they have about coffee and tables.
Similarly, with ethics, if I really take seriously that the universe has its own moral code or sense of right and wrong, then I realize that there’s no reason to think that code has anything to do with what I think of as right and wrong. But, more importantly, what the universe happens to “think” about morality doesn’t give me any reason to change what I think of as morally right and wrong, even if I somehow suspected that the universe cared more about purple than about murder. That no more makes murder less wrong than being a brain in a vat makes this not a table. Whatever I mean by “table” is limited by how I understand what this universe is, and whatever “right” and “wrong” are is limited by how I understand how humans are supposed to interact.
That doesn’t answer any questions about what “right” and “wrong” refer to, and I can still be way off on what things I think are right or wrong. Lots of people think it’s fine to eat animals that are raised on factory farms, or at least think it’s fine not to think about where their meat comes from. I don’t think those people are necessarily bad, and I don’t automatically doubt their other moral judgments, but I do think they’re mistaken on this one. Or maybe I’m the one who’s wrong, and the suffering of animals isn’t morally significant. My point is that at least one of those positions is wrong, and I can think one of us is wrong, even seriously wrong, without also thinking that none of us have any idea of what morality is all about.
What I do assume, though, is that we can figure out what’s right and wrong. Maybe I can’t figure it all out, certainly not on my own, and maybe I’ve got some serious blind spots that will keep me from ever understanding what my moral obligations are. I can accept that. But I can’t accept that I’m so entirely misguided that I have no idea what I’m even trying to do with morality, that someday we’ll discover that murder is ok and the best thing that humans can do is coat the world in purple.
Here’s an important distinction, though: I’m not saying that it will never be true that humans ought to coat the world in purple. Things might change, and who knows what purple might do? (Maybe we’ll someday be visited by the people from Prince’s homeworld who graciously and sexily offer that, in exchange for hosting Prince, they’ll let us continue as an unenslaved species, but only if we can coat the world in enough purple to demonstrate that we regret letting him die when he clearly had more albums left to record.) But I don’t see how something like maximizing purple could be morally obligatory now without our noticing it. If this is a moral blind spot, it’s so serious and widespread that I have to think that I don’t understand morality at all, that I don’t even understand what morality is. And if I don’t understand morality that fundamentally, then I’m back to thinking that, even if I don’t understand what morality really is and I don’t know what the word “wrong” really means, I still understand that murder is wrong and purple doesn’t have to be maximized (morally, at least).
Morality is bound to what we think it is because morality is action guiding, or at least tries to be, whatever else it is. To say that something is right or wrong isn’t a neutral description but is, in some way, to say something about how a person should act: do something, avoid it, maximize it, take it into account, or something else—but not be indifferent to it when acting. However we ended up with morality—evolved, pure reasoning, both, handed down on Sinai, implanted by Prince’s alien race—morality is used to guide actions and to talk about how our actions should be guided. So we can’t, as a group, be so far off about it that we can’t even understand how and why something supposedly morally right or wrong should guide our actions.
That’s why moral standards can’t be too high. Because morality is deeply entwined with how we guide our actions, moral standards can’t be so high that they aren’t realistically action guiding. “Don’t murder”: clearly action guiding and not hard to stick to. “Sell everything you have and work for the poor”: might be too high, or might not be. It depends a lot on the person and the society they’re in. For example, we can’t all follow that rule, since there’s no one to buy our stuff if we’re all selling to help the poor. But figuring out which particular moral rules to keep moves us from the abstract question of whether morality can impose standards that are too high to the practical question of which standards are, in practice, too high. And that’s going to move me into questions about what it is to reason morally, why consistency reasoning moves us towards perfection, and when to think that consistency reasoning has stopped being moral reasoning and has instead become something else. That’s a lot, though, and it’s for another day.
That’s not a criticism. Thinking of god as a really great human actually makes more sense to me, given the role of religious belief in people’s lives, than genuinely surrendering any idea of what god is. If you abandon any idea that you can understand god, the difference between a theist and an atheist is linguistic, and whether you would say you believe in god then probably says more about the people you talk to about god than about your beliefs. ↩
One of the best students I ever had, at UCLA, pushed the idea that a skeptical scenario would need to take seriously that I don’t understand the words I’m using. When I say “this is a lectern in front of me,” we focus on the sensory perception of the lectern, but there’s not a categorical difference between what I see and what I think “lectern” means, and not only because of how I came to learn the word “lectern.” I think it’s an underappreciated aspect of standard skeptical discussions, or at least those discussions as I always had them. ↩