3.2 Theme 2: Proposing novel designs for reputation mechanisms

Most of the work in this theme comes from researchers in computer science and multi-agent systems. The general consensus was that there are several very interesting ideas that need to be validated more rigorously within an economic framework. This is an area with great opportunities for collaboration between computer scientists and economists, even though researchers from both fields acknowledged the existence of a “language barrier” (which, we are confident, can be overcome through more sustained contact between researchers of the two disciplines).

Mechanisms for eliciting truthful feedback. In most online systems, feedback submission is voluntary. In the absence of concrete incentives, online community members may thus refrain from providing feedback, or may provide untruthful feedback, whether intentionally or not. A number of researchers are working toward mechanisms that give online community members strict incentives both to participate (i.e., provide feedback) and to report their observations truthfully [15, 31].
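
One way to see how such incentives can work is through a simplified peer-prediction-style scheme: a reporter is paid according to a strictly proper scoring rule applied to the prediction their report induces about a *reference* rater's report. The model below (a binary-quality product, the prior, and the signal probabilities) is an assumption for illustration only, not the specific mechanism of [15, 31]:

```python
import math

# Toy common-prior model (assumed for illustration): a product is "good" or
# "bad" with equal probability, and each rater independently observes a noisy
# binary signal, "pos" or "neg".
P_GOOD = 0.5
P_POS = {"good": 0.8, "bad": 0.2}  # P(signal = "pos" | quality)

def predicted_pos(report):
    """P(a reference rater sees "pos"), after a Bayes update on my report."""
    p_pos = P_POS["good"] * P_GOOD + P_POS["bad"] * (1 - P_GOOD)
    p_good = (P_POS["good"] * P_GOOD / p_pos if report == "pos"
              else (1 - P_POS["good"]) * P_GOOD / (1 - p_pos))
    return P_POS["good"] * p_good + P_POS["bad"] * (1 - p_good)

def payment(report, reference_report):
    """Log scoring rule applied to the prediction induced by the report.
    Because the log rule is strictly proper, truthful reporting maximizes
    expected payment under the common prior."""
    q = predicted_pos(report)
    return math.log(q) if reference_report == "pos" else math.log(1 - q)

def expected_payment(true_signal, report):
    """Expected payment when my private signal is true_signal but I submit
    `report`, assuming the reference rater reports truthfully."""
    p_ref_pos = predicted_pos(true_signal)
    return (p_ref_pos * payment(report, "pos")
            + (1 - p_ref_pos) * payment(report, "neg"))
```

In this toy model, `expected_payment("pos", "pos")` exceeds `expected_payment("pos", "neg")` (and symmetrically for a "neg" signal), so truthful reporting is the reporter's best response.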

Implicit extraction of reputation. An important theme of this workshop was the use of data mining techniques that can automatically extract reputational information from publicly available networked data structures, such as the Web and Usenet groups [7, 13, 26]. An impressive amount of information about someone’s social standing, past behavior, and interaction habits can be inferred in this way. Such implicit reputation mechanisms are an intriguing complement to mechanisms that rely on explicit feedback. They can be particularly useful for bootstrapping feedback mechanisms (i.e., substituting for feedback during the initial phase, when feedback is scarce), as well as in situations where feedback is unreliable or subject to strategic manipulation.
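
A minimal sketch of the idea: if public data can be mined into a directed “endorsement” graph (who links to, cites, or replies approvingly to whom), a PageRank-style power iteration yields implicit standing scores. This is an illustrative computation, not the specific techniques of [7, 13, 26]:

```python
def reputation_scores(endorsements, damping=0.85, iters=50):
    """PageRank-style scores over a list of directed (endorser, endorsee)
    edges mined from public data. Returns a dict of node -> score; the
    scores sum to 1."""
    nodes = sorted({u for u, v in endorsements} | {v for u, v in endorsements})
    out = {u: [] for u in nodes}
    for u, v in endorsements:
        out[u].append(v)
    n = len(nodes)
    score = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}  # teleport share
        for u in nodes:
            if out[u]:
                share = damping * score[u] / len(out[u])
                for v in out[u]:
                    new[v] += share
            else:
                # Dangling node: spread its mass uniformly.
                for v in nodes:
                    new[v] += damping * score[u] / n
        score = new
    return score
```

For example, with edges `[("a", "c"), ("b", "c"), ("c", "a")]`, node `c` (endorsed by two others) receives the highest score.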

Distributed feedback mechanisms. Most commercial feedback mechanisms are based on centralized architectures: feedback is solicited and stored in a single repository controlled by a single organization (e.g., eBay, Epinions, Amazon). Motivated by issues of privacy, trust, and scalability, some researchers are beginning to look at distributed feedback mechanism architectures [8, 13]. In such systems, agents receive reputational information from a variety of sources, including direct experience, feedback from third parties, and implicitly extracted information. An important challenge is to develop algorithms that combine these sources of information in a “sensible” way to adjust an agent’s beliefs. A further challenge is to model the effectiveness of such complex systems and compare it to that of centralized feedback systems. Finally, these systems may need to be resilient to “strategic” agents who attempt to influence the calculations for their own benefit, and even to “malicious” agents who merely attempt to render the system ineffective.
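
One simple way an agent might combine such sources is Beta-distribution bookkeeping in the spirit of beta reputation systems: direct experience counts at full weight, while third-party reports are discounted by how much the agent trusts the source. The class below is a hedged sketch under those assumptions, not the specific algorithms of [8, 13]:

```python
class ReputationBelief:
    """An agent's belief about a partner's reliability, kept as a
    Beta(alpha, beta) distribution over the probability of a good outcome."""

    def __init__(self):
        self.alpha = 1.0  # pseudo-count of positive outcomes (uniform prior)
        self.beta = 1.0   # pseudo-count of negative outcomes

    def observe(self, positive):
        """Direct experience: incorporated at full weight."""
        if positive:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def incorporate_report(self, positives, negatives, trust):
        """Third-party feedback, discounted by trust in the source (0..1).
        A fully trusted source (trust=1.0) counts like direct experience;
        an untrusted one (trust=0.0) is ignored."""
        self.alpha += trust * positives
        self.beta += trust * negatives

    def expected_reliability(self):
        """Mean of the Beta belief: expected probability of a good outcome."""
        return self.alpha / (self.alpha + self.beta)
```

The discounting also hints at one defense against strategic sources: lowering `trust` for a source whose reports diverge from the agent’s own experience shrinks that source’s influence on the belief.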


About Paul Resnick

Professor, University of Michigan School of Information