When Reputation Systems Are Worse Than Useless

A paper by Ely, Fudenberg, and Levine, titled “When is Reputation Bad?”, analyzes mathematical models of situations where public reputations make it harder, not easier, to sustain good behavior. I’ll start with their example of a car mechanic who prefers to be honest but is sometimes tempted to take an unfriendly action in order not to be mistaken, in the long run, for a crooked mechanic. Then I’ll try to summarize their findings about the class of situations that lead to this kind of problem.

Suppose that a car mechanic can recommend either a tuneup or a new engine, and that half the cars that come to her need a tuneup, half a new engine. Customers prefer to have the correct repair done (even though new engines are expensive). For any particular car, a good mechanic gains greater utility from being honest than from lying, but might be tempted to lie anyway because of long-run reputation effects, as we’ll see. A bad mechanic has no morals and likes the extra revenue from engine replacements, so always recommends an engine replacement. There are both good and bad mechanics out there, and customers know mechanics only from their reputation history, which is just the sequence of Tuneup/Replacement actions they took in response to previous customers.

Customers start with some initial belief about how likely it is that a mechanic is good. If that belief is high enough, a first customer will try the mechanic, and the game is underway. Even one tuneup in a mechanic’s history will convince customers that the mechanic is good, since bad mechanics always replace the engine. (The same phenomenon could occur, I think, if bad mechanics occasionally disguised themselves, but the analysis would be more complicated; look for a future post about a paper by Cripps, Mailath, and Samuelson that gives some insights into that.)

Suppose a mechanic has a string of engine replacements, with no tuneups. Each additional engine replacement makes customers more suspicious that the mechanic is bad (though it’s always possible that it’s a good mechanic who just happened to get a lot of cars that all needed new engines). Eventually, after some number K of engine replacements, customers are so suspicious that they stop going to that mechanic and the game is over.
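To make those belief dynamics concrete, here is a minimal sketch (in Python) of the Bayesian updating a customer might do after seeing a run of engine replacements, and of how the cutoff K emerges. The prior belief, the fraction of cars needing engines, and the trust threshold are illustrative numbers I picked, not values from the paper, and the sketch only covers histories with no tuneups (a single tuneup would immediately reveal a good mechanic, as noted above).

```python
# Sketch of a customer's Bayesian updating after observing a run of
# engine replacements with no tuneups (illustrative parameters only).

def posterior_good(prior_good, n_replacements, p_engine=0.5):
    """P(mechanic is good | n consecutive engine replacements).

    Assumes a good mechanic recommends a replacement only when the car
    needs one (probability p_engine), while a bad mechanic always does.
    """
    like_good = p_engine ** n_replacements  # good type: every car happened to need an engine
    like_bad = 1.0                          # bad type: recommends replacement every time
    p_good = prior_good * like_good
    p_bad = (1 - prior_good) * like_bad
    return p_good / (p_good + p_bad)

prior = 0.9        # assumed initial belief that the mechanic is good
threshold = 0.5    # assumed belief below which customers stop coming

# K = length of the replacement run at which customers walk away
K = 0
while posterior_good(prior, K) >= threshold:
    K += 1

print([round(posterior_good(prior, n), 3) for n in range(K + 1)])
print("customers stop after K =", K, "replacements in a row")
```

With these particular numbers the posterior drops below the threshold after four replacements in a row; the exact K depends entirely on the assumed prior and threshold.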

Now consider what the good mechanic should do if she happens to get K cars in a row that all need new engines. On the Kth one, she knows that being honest will cause her to be mistaken for a bad mechanic and she’ll get no future business, so she’s tempted to recommend a tuneup even though she thinks the car needs a new engine. But customers, knowing that even a good mechanic will not be honest once she has K-1 engine replacements in her history, will not bring their cars to a mechanic in that situation. By an unraveling argument familiar in game-theoretic analysis, that means the good mechanic will not be honest on her (K-1)th car either, if she’s had all engine replacements up to that point, and so on all the way back to the very first car. Thus customers can’t trust even the good mechanics to be honest, even on the first car, and no one uses the mechanics at all.
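The unraveling step can be caricatured in code. The sketch below is not the paper’s equilibrium analysis (which involves payoffs and discounting); it just encodes the one assumption driving the argument: if no customers will come in the state reached by an honest engine recommendation, the good mechanic would lie, and customers who anticipate that stay away one state earlier.

```python
# Toy backward induction over the unraveling argument.
# State n = number of consecutive engine replacements in the history.
# customers_come[n]: will a customer show up when the history is n replacements?

K = 4  # reputation "black hole": after K replacements, no one ever returns (assumed)

customers_come = {K: False}
for n in range(K - 1, -1, -1):
    # An honest "engine" recommendation in state n leads to state n + 1.
    # If no customers come there, the good mechanic would lie in state n,
    # so customers in state n cannot trust her and stay away as well.
    good_mechanic_would_be_honest = customers_come[n + 1]
    customers_come[n] = good_mechanic_would_be_honest

print(customers_come)  # every state maps to False: trust unravels back to n = 0
```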

The moral of the story is that the public reputation system is creating the wrong incentives. The usual incentive effect of a reputation system is to cause a strategic player to do something that helps other people, in order to be “confused” with the type of player who really does like to help other people. Here, it’s creating an incentive for a strategic player to do something that hurts other people, in order not to be confused with the type of player who really prefers those harmful actions.

The paper summarizes (p.7) the conditions that can lead to this kind of problem:

  1. Information about a player is revealed only when other players are willing to engage with that player, so that getting a sufficiently bad reputation is a black hole that you can’t escape from.
  2. There are “friendly” actions; a high probability of friendly actions is what causes partners to be willing to play. (In the mechanics example, honesty is the friendly action.)
  3. There are bad “signals” or outcomes that occur more frequently with unfriendly actions but occur sometimes even with friendly actions. It is these signals/outcomes that will be made publicly visible in a reputation system. (In the mechanics example, the bad outcome is recommending an engine replacement.)
  4. There are “temptations”, unfriendly actions that reduce the probability of all the bad signals and increase the probability of all the good signals. (In the mechanics example, the temptation is reporting the signal “tuneup” even when the car needs an engine replacement.)
  5. The proportion of player types who are committed to the friendly action regardless of its consequences is not too large. (These would be mechanics who would never say “tuneup” when you needed an “engine”, even if it meant closing their business tomorrow.)

Note that these conditions can be met for the mechanics situation even if the public signals that are shared reflect whether the engine really needed to be replaced. For example (see p.30), suppose that the good mechanics get an imperfect reading of whether a car needs a tuneup or an engine replacement. But after they try one or the other, the truth is revealed and goes into their publicly visible reputation, along with the action they chose. In this case, a “bad signal” is when the mechanic turns out to be wrong in her recommendation [Note added after initial post: the bad signal is really being wrong in a recommendation of “engine”; see followup comments]. The friendly action of making an honest recommendation can still lead to a bad signal, though a bad mechanic who always recommends an engine replacement will still get a bad signal more frequently. Recommending a “tuneup” is still a temptation, to avoid being confused with the bad mechanics.
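To see why the types still separate in this variant, here is a back-of-the-envelope calculation with an assumed diagnostic accuracy q for the good mechanic; the numbers are mine, not the paper’s.

```python
# Variant where the truth is revealed after the repair.
# Bad signal = a recommendation of "engine" that turns out to be wrong.

p_engine = 0.5   # fraction of cars that truly need a new engine
q = 0.8          # assumed accuracy of the good mechanic's diagnostic reading

# Honest good mechanic: wrongly recommends "engine" only when the car
# needs a tuneup but her reading says "engine".
p_bad_signal_good = (1 - p_engine) * (1 - q)   # 0.10

# Bad mechanic always says "engine": wrong whenever the car needed a tuneup.
p_bad_signal_bad = (1 - p_engine)              # 0.50

print(f"P(bad signal | good, honest) = {p_bad_signal_good:.2f}")
print(f"P(bad signal | bad)          = {p_bad_signal_bad:.2f}")
```

However accurate the reading is (short of perfect), honesty still produces the bad signal sometimes while the bad mechanic produces it more often, and reporting “tuneup” despite an “engine” reading remains a temptation that avoids the bad signal.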

An earlier draft had a useful discussion of why not all “advice” processes will fit the criteria listed above, though I don’t find it in the current draft. Perhaps most important is criterion 1, that getting a bad reputation is something you can’t escape from. If a player can pay a fee to encourage customers to continue interacting with her, or if there are some customers who don’t pay attention to reputations, or if there’s some way to keep generating public signals without having any customers take a risk on you, then there can be an escape from the black hole, and the unraveling argument won’t come into play (the temptation option is not so compelling just before your reputation is about to enter the black hole). In other situations, condition 4 may not apply: there may not be a temptation action available that reduces the probability of all the bad signals while increasing the probability of all the good signals.
