Auditing Fast and Slow
By Brian Evans


This article applies concepts from behavioral economics to auditing. Behavioral economics is a relatively new area of economic study that seeks to understand the cognitive processes that drive economic decisions. Using empirical research, it challenges the conventional assumption that humans act rationally by showing that other factors contribute to our decision-making more than traditional economic models assume. These cognitive processes affect each of us in our daily lives whether we realize it or not. They also influence our professional lives.

So you may be thinking, what do audits have to do with decision-making? In performance audits, we often talk about improving outcomes. My hypothesis is that performance audits can help an organization achieve better outcomes by improving decision-making. Audits do this by providing independent and objective information (findings). Audit recommendations can help decision-makers prioritize actions that will have the greatest chance of improving outcomes.

The inspiration for this article comes from a book called Thinking, Fast and Slow. The author, Daniel Kahneman, is a Nobel Prize-winning psychologist who has spent decades studying judgment, decision-making, and happiness. In the book, Kahneman describes two systems our brains use to process information. System 1 is fast, instinctive, and emotional. You can think of it as all the things our minds do when we are forced to react to something that happens to us. System 2 is slower, more deliberative, and more logical. You can think of it as all the things our minds do to overcome the potential pitfalls of overreacting or underreacting.

One of Kahneman’s main points is that we need both systems to survive, but that each system has the potential to lead us to poor judgment unless we acknowledge and account for the weaknesses embedded in each. When we don’t, there is a greater risk of introducing bias into decision-making.


Kahneman’s book, and others like Subliminal and The Black Swan, are filled with examples of studies showing how our decision-making can go wrong when we rely too much on system 1. For example, a 2005 study found that people tend to unconsciously eat more popcorn, regardless of its quality, if they receive a larger container of it. If you think about it, it’s pretty remarkable that our eating habits depend on something as random (or strategic) as the container in which food is served.

In another study, test subjects reacted differently to computerized voices depending on whether they sounded masculine or feminine, with subjects showing profound but unconscious gender biases. These types of studies are eye openers for me because they call into question our ability to understand what is driving our decisions. Our unconscious minds seem to have more influence on our actions than we realize.


The differences between system 1 and system 2 have some similarities to the dynamics between auditees and auditors. In my conceptualization, auditees (i.e., decision-makers and managers) are more like system 1, and auditors are more like system 2. To balance the potential weaknesses of each group, an organization needs both.

This isn’t to say management is always reactive and emotional or that auditors are always slow and indecisive. The point is that our roles in an organization, and the incentives embedded in those roles, seem to align pretty well with system 1 and system 2. So, if auditors are more like an organization’s system 2 and auditees are more like system 1, it may become clearer how each group could benefit from each other. It also may be easier to determine how we can communicate with each other.

Some of the examples in this article won’t be new to you, but thinking about them in the context of cognitive processes may be. Understanding why certain audit techniques seem to work well, while others don’t, creates the potential for innovative and adaptive ways to apply them. Subtle changes could have a big impact on recommendation implementation.

System 1 influences us by taking shortcuts that may be efficient (quick) but not effective. One of the observations from Thinking, Fast and Slow is that our perception of stand-alone objects is not very good, so our brains typically identify objects by their difference from an anchor point. From our earliest days of life, our vision develops by identifying variations. The research cited in the book shows that our brains do this as a quicker way to identify things. We don’t analyze all the traits of an object; we evaluate it against something else and note how it is different.

One way this plays out in the real world is that people’s satisfaction with their salary is based more on how it compares to others’ than on the actual amount. For example, if everyone I know makes $40,000 and I make $50,000, I will be happier than if everyone I know makes $60,000. My income is the same in both cases, but my perception of it changes. Our sense of positive and negative depends on the distance from an anchor point, not the actual salary amount.

Another observation is that we don’t aggregate information well, so we put more weight on actual or potential losses, even when the net effect of a set of changes is positive. The example cited in the book is an investor who made several trades in the stock market. The overall effect of the trades was increased earnings, but some of the individual trades resulted in losses. The investor became preoccupied with the losses even though the portfolio gained overall.
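The arithmetic behind the trading example can be made concrete with a small sketch. The numbers here are hypothetical (the book does not give figures); the point is that a set of trades with a clearly positive net result can still contain several individual losses for system 1 to fixate on.

```python
# Hypothetical trade results in dollars (positive = gain, negative = loss).
# Illustrative figures only, not drawn from Kahneman's example.
trades = [120, -45, 80, -30, 200, -15]

net_result = sum(trades)               # what system 2 should weigh
losses = [t for t in trades if t < 0]  # what system 1 fixates on

print(f"Net result: {net_result:+d}")  # +310: an overall gain
print(f"Losing trades: {len(losses)} of {len(trades)}")
```

Framing the result as "+310 overall" versus "three losing trades" describes the same data, but the two framings invite very different emotional reactions.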

This example suggests that even when there is positive news to report, the audience may feel a sense of loss. Most audit reports aren’t filled with good news, so it’s understandable that auditees will feel some sense of loss. Those feelings can make it more difficult to absorb the audit’s overall message. To help direct attention to the key messages, it may be better to emphasize the overall trend rather than the individual observations.

Another example, called “the problem with turkeys” comes from The Black Swan. Here’s how it’s described in the book:

"Every single feeding will firm up the bird's belief that it is the general rule of life to be fed every day by friendly members of the human race 'looking out for its best interests.' On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief."

The example is used to show that the significance of an event depends in part on the observer. For the turkey farmer, the Wednesday before Thanksgiving is significant for entirely different reasons than it is for the turkey. For the turkey, the event is catastrophic. The author’s conclusion is that the objective of risk management should be to avoid being the turkey. He doesn’t mean we should avoid the farmer, but that we should build systems to test our assumptions so that surprises don’t become catastrophic.

These examples offer lessons that can be applied not only to our own thinking but also to the audit process. Wherever they are applied, I think they can help us be innovative and adaptive in addressing risks that may go unnoticed if we, or our organizations, come to rely too much on system 1.



Given the availability of information and the complexity of today’s world, it is reasonable to conclude that decision-makers are under more pressure now than they have been in the past. That may seem counterintuitive when you consider the ability to research any question from your phone almost anywhere in the world. But the availability of information means there are more voices and more considerations to weigh when trying to make a decision.

Although each person will react to these conditions differently, I think it’s likely that overwhelmed decision-makers will lean on system 1 more heavily, which could increase the chances of under-informed decisions. To balance this out, they need to be reminded of the value system 2 brings. At the organizational level, the audit function can serve as system 2 when it provides information that is relevant to decision-makers.

One of the things that has surprised me most during my 10 years in performance auditing is how well maturity models have been received in audit reports. My assumption was that they would be viewed as condescending or imprecise, but they seem to resonate with auditees. I think this is because they simplify what is needed and show a path toward gradual improvement. Maturity models seem to engage system 2 in a way that helps auditees see audits as creating a learning environment, rather than a punitive environment where there are only winners and losers.

So if we are asking decision-makers to develop their system 2, or to rely more on the audit function to serve that purpose, is there anything we should work on to add value? My guess is that auditors, both individually and as a function within our organizations, are most comfortable using system 2, but we may want to spend some time thinking about how we can support system 1, both for ourselves and for the organizations we serve.

One way to do that is to think about the demands decision-makers are under and consider them in our audits. These aren’t new ideas, but I think they take on even more significance in light of the cognitive processes that shape decision-making. Here I’m thinking about:

  • Simplifying complex ideas and being direct in audit language.
  • Being comprehensive, but recognizing the need to prioritize efforts to address recommendations and break them out into manageable pieces.
  • Thinking about timing throughout the audit process. That could mean starting audits so that results will be known before decisions are made, or setting a deadline and working backward to scope the audit to meet that deadline.

These approaches may not be possible in all situations or organizations, and they may have drawbacks, but so does an audit that comes out after decisions have been made.


  1. Thinking, Fast and Slow, Daniel Kahneman, 2011
  2. Subliminal: How Your Unconscious Mind Rules Your Behavior, Leonard Mlodinow, 2012
  3. Blind Spots: Why We Fail to Do What's Right and What to Do about It, Max Bazerman and Ann Tenbrunsel, 2012
  4. Nudge: Improving Decisions about Health, Wealth, and Happiness, Richard Thaler and Cass Sunstein, 2008
  5. The Black Swan: The Impact of the Highly Improbable, Nassim Taleb, 2007


Brian Evans is the elected auditor at Metro, the regional government that serves the Portland metropolitan area. Prior to his election as the Metro Auditor, Brian was a Principal Management Auditor at Metro and the Senior Economist with Oregon’s Economic and Community Development Department. Earlier in his career, he served as an AmeriCorps member working on microfinance programs at Mercy Corps. Brian holds a Master's degree in Public Affairs and is a Certified Internal Auditor and Certified Government Auditing Professional. He serves on the Executive Committee of the Pacific Northwest Intergovernmental Audit Forum (PNIAF) and on the Education Committee of the Association of Local Government Auditors (ALGA). He has presented at several training events organized by PNIAF and ALGA.