Being Morally Responsive: The Correct Mindset

In this first article on the art of being morally responsive, I’d like to focus on what happens between your ears. Every policy tool (e.g. gender-based analysis, risk management) has a set of checklists, matrices, and processes that need to be used. These are important but, on their own, insufficient; a policy tool also has an associated mindset that practitioners need to understand and apply. Without the right way of thinking, the tool won’t work.

For example, in a cost-benefit analysis, it’s best to think exclusively in financial terms by attempting to put a dollar value on everything. This includes assets and liabilities that are easily appraised (like the cost of purchasing materials or financial savings), but also intangibles when possible. In extreme applications of this mindset, economists have even estimated the value of a human life using statistical methods. Although unpalatable for many people, this estimate can be a useful way to compare the costs and benefits of a particular policy. As a hypothetical, imagine that building a traffic barrier would cost $10 million and would be expected to save one life. If a human life is “worth” $11 million, then this is a sound investment (ignoring discount rates). If it is only worth $9 million, then it’s not, and the money should be spent elsewhere, even if we know that someone will die as a result. This is a cold, calculating, and sometimes disconcerting way to make decisions, one that we would expect robots to use. But it has to be this way. Without translating everything into the same “units” (e.g. dollars), comparisons are difficult and cost-benefit analysis can’t be done very effectively.
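To make the arithmetic concrete, here is a minimal sketch (in Python) of the comparison described above. Everything in it is illustrative: the barrier cost, the expected lives saved, and the value-of-statistical-life figures are the hypothetical numbers from the paragraph, not official estimates, and the function name is my own invention.

```python
# Illustrative sketch of the cost-benefit mindset using the hypothetical
# traffic-barrier numbers above. The value-of-statistical-life figure is
# an assumption for demonstration only, not an official estimate.

def is_worthwhile(cost: float, lives_saved: float, value_per_life: float) -> bool:
    """Return True if the monetized benefit exceeds the cost (discounting ignored)."""
    benefit = lives_saved * value_per_life
    return benefit > cost

barrier_cost = 10_000_000     # hypothetical cost of building the traffic barrier
expected_lives_saved = 1      # hypothetical lives saved over the barrier's lifetime

print(is_worthwhile(barrier_cost, expected_lives_saved, 11_000_000))  # True: build it
print(is_worthwhile(barrier_cost, expected_lives_saved, 9_000_000))   # False: spend elsewhere
```

The point of the sketch is simply that, once everything is expressed in the same units, the decision reduces to a single comparison, which is exactly what makes the mindset both powerful and unsettling.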

This mindset was most famously applied (to most people’s horror) by the Ford Motor Company in selling the Pinto, a story that was popularized in Fight Club. Although some of the details remain shrouded in mystery, it appears that Ford knew before releasing the car that the Pinto was at risk of leaking fuel and catching fire following low-speed collisions. However, Ford determined that the cost of refitting the fleet would be higher than the eventual payouts to the expected victims of burn-related injury or death. Consequently, they sold the car without any adjustment.

The Ford risk managers approached their dangerous product with the “cost-benefit mindset”, seeking to maximize financial gain. This can be a useful mindset when applied properly, such as when deciding whether to purchase a new machine for a factory, but it can also be misapplied. To any readers outside of government, worry not: the Canadian government does not have a dollar figure that it applies to human life. But policy could be developed in this way, if we so chose.

Add the cost of the sacrificial puppies… Carry the one…

Although every good policy has undergone some form of cost-benefit analysis, it is clearly insufficient alone. It can help determine if a policy is cost-effective, but it can also lead to perverse conclusions unless other tools (and the associated mindsets) are used in concert. Consequently, effective policy analysts are familiar with several different tools and mindsets, and they can switch between them quickly, looking at a policy from one angle and then from another. Only then can we be confident that a policy is well thought-out.

Values analysis is one of the tools that every analyst should use. Like cost-benefit analysis, it has its own mindset that must be applied for the tool to work. Three of the key principles of this mindset are:

1. Accurately and impartially assess honestly held values.

Values analysis isn’t about judging; it’s about understanding and describing. That can be difficult, because analysts are people too. We carry our own values with us, and they can often serve as a barrier to a clear-eyed assessment of other people’s beliefs. When we think about problems that elicit strong moral reactions, it is easy to see our own perspective as the only possible one. It can seem as though anyone with a functioning moral compass simply has to feel this way, in part because our own moral reactions feel so self-evident.

Of course, that’s not true. Well-meaning and psychologically typical people have different moral reactions. For example, it might seem obvious to left-leaning people that stricter gun control in the United States is the only morally defensible stance, because it would almost certainly save lives. Who could argue against saving lives?

Well, plenty of people, because our moral compasses aren’t designed solely to reduce suffering. We also value personal liberty, and gun control can be interpreted as infringing on the freedom of individuals to do what they wish. Or, in a country like the United States, gun control could be interpreted as an attack on foundational national principles, such as the right to bear arms. Others may view broad-based restrictions as unfair, because hundreds of thousands of Americans safely own guns without killing anyone. These people shouldn’t be punished for the sins of others.

Without a values analysis, analysts can easily overlook these other moral interpretations, believing them to be “fringe beliefs” held only by a small minority of sociopaths. They’re not, though, so analysts need to make active efforts to downplay their own moral views and not allow them to colour their analysis. This is hard, but it’s possible with effort.

2. Don’t seek ulterior motives for moral stances. Most people don’t lie for strategic gain.

When someone is spouting off ideas that you find morally reprehensible, it might be tempting to believe that they aren’t serious. They’re being strategic. It’s all an act to hide an ulterior agenda behind moral pandering. Since no one could possibly believe those things, these people must somehow benefit, usually economically. So, when wealthy people oppose higher taxes on moral grounds, it’s only ostensibly because they believe it’s unfair for the government to forcibly requisition a greater proportion of their rightfully earned incomes, a view sometimes expressed by the super-rich. Rather, it’s actually because rich people want to retain their power and influence. It’s in their economic interest to oppose tax hikes, and the moral arguments are nothing more than a smokescreen.

A similar type of argument was pushed by a group of academics in response to the flag-waving at the Freedom Convoy protests in Ottawa. These academics suggested that it was all a ruse. The demonstrators didn’t truly believe that their cause was patriotic; they cynically employed the flag to ensure that the police wouldn’t intervene and make themselves look bad on TV (among other tactical advantages).

Although this explanation is possible, these academics may be too clever for their own good. It’s far more likely that the protesters brandished the Canadian flag because they were patriotic and believed their cause was too. That’s it. How do I know? I defer to Occam’s Razor (i.e. the simplest explanation is usually correct). What’s simpler?

  1. people are triangulating their comments to push an ulterior agenda, using moral language to advance a long-term strategy, or
  2. people basically say what they believe.

The answer is obvious. Clever people, who usually stalk the opinion section of major newspapers, can build entire careers by “seeing through” moral stances to the “real” agenda, which usually connects back to economic interests or power relations. Sometimes, this approach is useful because people genuinely do pursue this strategy. However, it is antithetical to the values-analysis mindset. Values analysis is about what people believe, not how they benefit. And people are usually honest about their beliefs.

3. When faced with several possible explanations for beliefs, choose to analyze the most socially acceptable ones first.

There are often several possible moral explanations for people’s views, and analysts need to decide which deserve further exploration and which are not worth the time. As an example, let’s look back to affirmative action in the education system. Whether it is right for advanced schools and universities to give preferential treatment to disadvantaged racial minorities is an important and controversial moral question. Here are two reasons why a member of a privileged group might oppose affirmative action:

  1. They believe affirmative action is unfair, because people from privileged groups who work hard and achieve excellent grades can be locked out of certain opportunities due, in part, to their race; or
  2. They’re bigoted and don’t want racial minorities sharing space with their children.

Well, both of these could explain why someone would oppose affirmative action, and both justifications could be true for different people. It’s also possible to conduct values analysis on both propositions. If the first is accurate, values analysis would focus on the proportionality expression of the fairness/cheating value and seek to find ways to make gifted education more accessible for everyone. If the second is true, then values analysis would instead focus on the loyalty/betrayal value, which can morally justify racism for people who see their racial group as an important “in-group” worthy of protection (a view that I, obviously, do not share). Which explanation should the public servant focus on?

If the values-analysis mindset is properly applied, then focus should be directed towards exploring the more socially acceptable explanation (in this case, the first one). This doesn’t mean that the other explanations shouldn’t be discussed. Rather, this is a question of prioritization. We have limited time and resources available to research and analyse policy problems, and it is more important that public servants devote resources to addressing the best moral arguments against a policy proposal rather than the worst. In this particular example, the government would likely gain from systematically accounting for fairness considerations, but it is unlikely to benefit significantly from re-evaluating a policy because it upsets racists.

To be clear, racism is an important issue that should not be ignored by policy-makers. As well, values analysis is useful in exploring some of the less attractive views that people hold. But when we’re applying the values analysis mindset, it is more useful to prioritize socially acceptable arguments, because this approach brings several benefits:

First, socially acceptable arguments are more likely to drive public opinion. Most people’s moral compasses are well attuned to the values of broader society, so focusing on more acceptable explanations is more likely to reflect the views of a larger subset of the population.

Second, the resulting analysis will be stronger. The government doesn’t necessarily need to adapt its policymaking to the least defensible moral reaction. Do racists exist? Yes, but explicitly racist outrage is unlikely to be particularly persuasive or present a meaningful public challenge to a policy, so it isn’t pressing to account for these views in policy development. In contrast, concerns about fairness drive a large group of people to oppose affirmative action, so it’s better to aim high and respond to more acceptable values.

Third, focusing on more socially acceptable explanations is the best way to check biases. If values analysis suggests that only racists would oppose affirmative action on moral grounds, then it has completely missed the point. It has reduced other people’s views into strawmen that can easily be discounted as the ideas of a radical fringe, rather than explored the intricacies of differing moral compasses. Since the underlying analysis will be based on a caricature of real beliefs, the resulting policy isn’t going to be any better. It’s going to be exactly what the government planned on doing in the first place.

In order to illustrate this point, imagine that a government intended to implement a policy that promotes affirmative action in schools. Due to constraints on time and resources, the proposal contained a values analysis suggesting only that racists would oppose the policy because they’re… well… racist. How would we expect the government to react? Do we expect them to change the design of the policy to better align with the views of racists? Probably not. In this case, the values analysis will only serve to validate the policy that was already chosen. If only bigots could be expected to oppose it, then it’s probably a great policy!

Unfortunately, this analysis is just wrong. There are plenty of non-racists who oppose affirmative action policies on moral grounds. These people are not going to react kindly to the policy, and decision-makers were not warned about the blowback. In other words, the tool didn’t work very well.

Alternatively, imagine that the proposal goes for decision, but the values analysis instead focuses on the fairness/cheating value with regards to affirmative action. It points out that many people believe that test scores and other meritocratic criteria should be the only way of regulating admissions to better schools, because, to these people, it rewards excellence in a racially blind fashion. Of course, the government doesn’t need to agree with this moral stance, but it presents a far more difficult (and constructive) challenge to the policy proposal. By analysing this viewpoint, it’s possible to develop a better policy design and/or communications strategy.

In short, focusing on unacceptable moral views tends to construct strawmen, so it is unlikely to improve outcomes. In contrast, ensuring that a policy is responsive to the most acceptable views is a far tougher test, which would likely lead to superior decision-making. It’s obvious which is the better approach.

Conclusion

To be effective, all policy tools require analysts to think in particular ways. Fundamentally, the values analysis mindset is optimistic, recognizing that well-meaning people can have different values and many of them have something important to say. It requires confidence in people’s honesty and openness to alternative worldviews, even those the analyst personally finds reprehensible. In conducting values analysis, we should seek to find the most socially acceptable reasons to justify a moral stance, not the least. This is obviously not the only mindset analysts need to create good policy, but it is a useful one.
