good and bad rationalization

ethics for assholes, part 4

To rationalize is human. It’s inevitable, given that we like to understand ourselves and have very limited information. Rationalizing isn’t some unique mental act. It’s exactly the same thing we do when we notice the bumper sticker on the car driving erratically and think, “No wonder! Pittsburgh driver.” What we know about the driver is limited to their kind of car, what’s on the bumper sticker, and maybe whatever the outline of their head reveals. The best explanation of why they’re driving erratically probably has nothing to do with any of those. (“Typical Honda driver!” “Typical ball-cap-wearing driver!”) But that’s the information we have, so we use that limited information to “explain” what they’re doing.

I’m not sure if this is a digression or an illustration, but this is, in a nutshell, why it felt wrong when my mom told me about the “Indian doctor” she liked so much. She thinks she’s only describing him—and she is! he’s actually from India—but it’s kind of like saying “the car with the Steelers bumper sticker”: it’s not obvious how the bumper sticker matters. And because the information is so limited, it’s only useful as part of an explanation if you connect it to a stereotype of people with Steelers bumper stickers, or Indian doctors. The stereotype might be true and might even be explanatory in this particular case: maybe this person is a bad driver because they learned to drive in Pittsburgh, where the drivers really are bad. But they might also be driving erratically because a spider just dropped down into their lap, and they’re deathly afraid of spiders. That could be true regardless of whether they’re also a bad driver, also from Pittsburgh, and whether the drivers there are also bad. (Similarly, all kinds of things might be true of the doctor from India in virtue of his being from India, but are those going to be the best explanation of whatever we’re about to discuss about this doctor?) The point is that I’m coming up with the best explanation I can given the limited information I have, so I will almost certainly have to rely on a stereotype to turn it into an explanation.

We explain our own actions in the same way: we have limited information and a strong desire to explain what we do. For explanations of our own actions, though, it’s more complicated because we also think of ourselves as pretty good people. So it’s like explaining the erratic driving in front of us when we’re also a Steelers fan: “They’re probably celebrating! Good people.” (We can also rationalize when we have low self-esteem, in which case our bias isn’t that we’re a good person, but that we’re not a good person. Probably less of a bias, but also dangerous for other reasons: believing we’re good people without evidence might be a great way to make it through tough times.)

We’re rationalizing when we rely on the scraps of information we have to explain why we did what we did, usually in light of our knowledge that we’re pretty good. It’s rationalizing because there are other, better explanations of our actions. Of course, we don’t know that we’re rationalizing because we usually stop trying to come up with explanations when we get to one that feels right, and the one where we’re a good person usually feels right. So we don’t keep looking for better explanations.

What would the alternative to rationalizing look like? We could stop trying to understand ourselves at all, or we could try to understand ourselves entirely as an outside observer like a psychologist would understand us. I have some doubts about whether we could even believe that kind of explanation about ourselves—the psychotherapist might convince me that I’m attracted to people who remind me of my mother, but can I really accept that explanation while still being attracted to the person? It’s an extreme example, but it illustrates the problem with accepting “outside” explanations of your actions. Even if we can accept these kinds of explanations, are they better? Are psychological explanations better than neurological ones, and those better than astrological ones? There are answers to these questions, but distinguishing better and worse explanations is a challenge.

I’ve written before about how rationalization does us some good, and one of my thoughts there was that rationalized explanations are still superior to no explanations of our actions. The benefit is that giving any explanation, even a rationalized one (assuming it’s sincere), opens the person to criticism in two ways. First, you can criticize the person’s motives, which you can only do if there’s some explanation of why the person did something. Second, and more important, explanations make it easier to assess whether a person is consistent in their reasoning about why they do things.

That’s really valuable because consistency reasoning is, I believe, the main way in which we make any kind of personal moral progress—at least once we’re old enough to start reflecting on our own actions. In the same way that we have a hard time leaving our own actions unexplained, we also have a hard time leaving alone an inconsistency in our motivations. We can do it for a while, but if we’re forced to keep looking at the inconsistency, we’ll feel pressure to resolve it. If I do something to help a friend at work and don’t do something to help a different friend in a different work context, I can ignore it, but if I’m forced to acknowledge that my actions look inconsistent, then I start looking for differences in the cases that justify treating them differently. Rationalizations at least put pressure on me to think about consistency in my actions because I’ve at least bothered to come up with an explanation of my own action. That seems like a benefit of rationalization.

Is it enough of a benefit that it’s always going to lead to good? Of course not. I could start with bad explanations, reconcile those explanations using bad reasoning, and end up committing myself to bad principles that I never would have committed to if I hadn’t rationalized in the first place. For example, I tell myself I helped one work friend because he’d previously done me a favor. Now, it’s a bad explanation because, in fact, I’d repaid that favor many times over already; and, if I owed anyone anything, it was the other friend, whose favors for me I always manage to forget because I take him for granted. So my explanations of my action are bad, and now I’m explaining the inconsistency by thinking that what I care about is repaying favors. So now I start to understand myself as someone who cares about loyalty and tit-for-tat repayment. Loyalty is fine in limited doses, but it’s also the preferred underlying principle of villains, as far as I can tell: whatever is morally defensible about loyalty is easily swamped by the favoritism and bias it covers for. So my rationalized bad explanations have led me down a path to believing that I care about loyalty, which I might use in the future when deciding what I should do, leading me to do even worse things than I would have done if I’d never bothered to reflect on why I treated these two people differently.

So rationalization isn’t an unqualified good. It’s an open question whether people are more likely to rationalize in a way that leads them to be worse or better. I think the pressures are, on balance, for the good, since the kinds of justifications we come up with will also tend to push us towards being good. But not always: loyalty is a good example of something that does as much harm as good, and maybe more harm. (To be more nuanced, any principle like be loyal is justifiable within a broader system of principles: that’s a topic for later.) Whatever good comes from rationalization is most likely to happen with some clear-eyed guidance about what is best together with some self-knowledge. So I think the real question about whether rationalization is good or bad will turn on the kind of self-knowledge we have, and are obligated to have. And that prompts the question of whether and why we’re obligated to know ourselves at all.