
Regret-free mechanism

From Wikipedia, the free encyclopedia

In mechanism design, a regret-free truth-telling mechanism (RFTT, or regret-free mechanism for short) is a mechanism in which no agent who reveals his true private information feels regret after seeing the outcome of the mechanism. A regret-free mechanism thus incentivizes agents who want to avoid regret to report their preferences truthfully.

Regret-freeness is a relaxation of truthfulness: every truthful mechanism is regret-free, but there are regret-free mechanisms that are not truthful. As a result, regret-free mechanisms exist even in settings in which strong impossibility results prevent the existence of truthful mechanisms.

Formal definition


There is a finite set X of potential outcomes and a set N of agents. Each agent i in N has a preference relation Pi over X.

A mechanism (or rule) is a function f that takes as input the agents' reported preferences P1,...,Pn and returns as output an outcome in X.

The agents' preferences are their private information; therefore, each agent can report either his true preference or some false preference.

It is assumed that, once an agent observes the outcome of the mechanism, he feels regret if his report was a dominated strategy "in hindsight". That is, there is an alternative report that, against every preference profile of the other agents compatible with the observed outcome, would have given him an outcome at least as good, and against at least one such profile would have given him a strictly better outcome.

A regret-free truth-telling mechanism[1] is a mechanism in which an agent who reports his true preferences never feels regret.
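
The condition can be stated a little more formally. The following is only a sketch based on the informal description above; the notation (the compatible set $C_i(x)$, the alternative report $Q_i$, and the weak relation $R_i$) is introduced here for exposition and is not taken verbatim from the cited papers.

Suppose agent $i$ reports his true preference $P_i$ and the mechanism selects the outcome $x = f(P_i, P_{-i})$. Let

$$C_i(x) = \{\, P'_{-i} : f(P_i, P'_{-i}) = x \,\}$$

be the set of preference profiles of the other agents that are compatible with observing $x$. The truthful report is dominated in hindsight if there exists an alternative report $Q_i$ such that

$$f(Q_i, P'_{-i}) \; R_i \; f(P_i, P'_{-i}) \qquad \text{for every } P'_{-i} \in C_i(x),$$

with strict preference for at least one profile in $C_i(x)$, where $R_i$ denotes the weak preference relation induced by $P_i$. The mechanism $f$ is regret-free truth-telling if no agent's truthful report is ever dominated in hindsight.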

In matching


Fernandez[2] studies RFTT in two-sided matching. He shows that:

  • In a one-to-one matching market, the Gale–Shapley (GS) algorithm is RFTT for both sides, regardless of which side proposes. Moreover, GS is the unique RFTT mechanism within the class of quantile-stable matching mechanisms; outside that class there are other RFTT mechanisms, but they are not natural. In particular, neither the Boston mechanism nor top trading cycles is RFTT. Finally, under GS, truth-telling is the unique report that guarantees no regret.
    • For example, suppose there are two women, Alice and Batya, and two men, Chen and Dan. The women's preferences are Alice: Dan>Chen and Batya: Chen>Dan; the men's preferences are Dan: Batya>None>Alice and Chen: Alice>Batya. Suppose the men are proposing. With truthful reports, the matching is Dan-Batya, Chen-Alice. If Alice truncates her preferences and reports only Dan, then the matching is Batya-Chen, and Alice remains unmatched. In this case, Alice regrets her misreport, since reporting truthfully would have guaranteed her a match (a code sketch after this list verifies this example).
  • In a many-to-one matching market (such as matching doctors to hospitals), doctor-proposing GS is RFTT for both sides, but hospital-proposing GS is not. This supports the decision of the National Resident Matching Program (NRMP) to switch from hospital-proposing to doctor-proposing GS.
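
The one-to-one example above can be checked mechanically. The following Python sketch implements a minimal men-proposing deferred-acceptance (Gale–Shapley) routine and runs it once with Alice's truthful report and once with her truncated report; the function name and data layout are illustrative and not taken from the cited paper.

    # Minimal men-proposing deferred acceptance (Gale-Shapley).
    # Preferences map each person to a list of acceptable partners,
    # most preferred first; unlisted partners are unacceptable.
    def deferred_acceptance(proposer_prefs, receiver_prefs):
        free = list(proposer_prefs)                   # proposers who still need to propose
        next_choice = {p: 0 for p in proposer_prefs}  # index of the next partner to try
        match = {}                                    # receiver -> tentatively accepted proposer
        while free:
            p = free.pop()
            prefs = proposer_prefs[p]
            if next_choice[p] >= len(prefs):
                continue                              # p has exhausted his acceptable partners
            r = prefs[next_choice[p]]
            next_choice[p] += 1
            if p not in receiver_prefs.get(r, []):
                free.append(p)                        # r finds p unacceptable
            elif r not in match:
                match[r] = p                          # r tentatively accepts p
            elif receiver_prefs[r].index(p) < receiver_prefs[r].index(match[r]):
                free.append(match[r])                 # r prefers p; the previous proposer is freed
                match[r] = p
            else:
                free.append(p)                        # r keeps her current partner and rejects p
        return match

    men = {"Dan": ["Batya"], "Chen": ["Alice", "Batya"]}   # Dan finds Alice unacceptable
    women_truthful = {"Alice": ["Dan", "Chen"], "Batya": ["Chen", "Dan"]}
    women_truncated = {"Alice": ["Dan"], "Batya": ["Chen", "Dan"]}

    print(deferred_acceptance(men, women_truthful))   # {'Alice': 'Chen', 'Batya': 'Dan'}
    print(deferred_acceptance(men, women_truncated))  # {'Batya': 'Chen'} - Alice stays unmatched

With truthful reports the printed matching is Dan-Batya and Chen-Alice; with the truncation only Batya-Chen is formed, and Alice (and Dan) remain unmatched.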

Chen and Möller[3] study school choice mechanisms. They focus on the efficiency-adjusted deferred-acceptance rule (EADA, or EDA).[4] It is known that EDA is not strategyproof for the students; Chen and Möller show that it is nevertheless RFTT. They also show that no efficient matching rule that weakly Pareto-dominates a stable matching rule is RFTT.

In voting


Arribillaga, Bonifacio and Fernandez[1] study RFTT voting rules. They show that:

  • When a voting rule depends only on the top alternative of each agent (e.g. plurality voting), RFTT is equivalent to strategyproofness. This means that, for 3 or more outcomes, the only RFTT mechanisms are dictatorships (by the Gibbard–Satterthwaite impossibility theorem); and for 2 outcomes, a mechanism is RFTT if and only if it is an extended majority rule.
    • As an example, to see that plurality voting is not RFTT with 3 or more outcomes, suppose an agent's preference ranking is z>y>x, so he truthfully votes for z. If he then sees that x is elected, voting y dominates his truthful vote in hindsight: it could never have hurt (x is already his worst outcome, and switching from z to y cannot make z win), and it could have helped (if y and x were tied). A brute-force check appears after this list.
  • For egalitarian voting rules: all neutral variants (i.e., those breaking ties by a fixed order on agents) are RFTT. The anonymous variants (breaking ties by a fixed order on candidates) are RFTT if and only if there are at least m-1 voters, or the number of voters divides m-1, where m is the number of alternatives.
  • For the veto voting rule (the scoring rule in which every candidate receives 1 point except the voter's least-preferred candidate, who receives 0), the results are similar to those for the egalitarian rules. Similarly, k-approval is RFTT.
  • Other scoring rules may not be RFTT. In particular, Borda, plurality and Dowdall voting, as well as all efficient anonymous rules, are not RFTT.
  • All Condorcet-consistent voting rules that also satisfy a weak monotonicity condition are not RFTT. This condition holds, in particular, for the rules of Simpson, Copeland, Young, Dodgson, Fishburn and Black (in both anonymous and neutral versions). Successive elimination rules are also not RFTT.
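
The plurality example above can be verified by brute force. The following Python sketch assumes a small electorate consisting of the agent plus two other voters and a fixed lexicographic tie-breaking rule; these assumptions are made only for the illustration and are not part of the cited result.

    # The agent's true ranking is z > y > x; he voted z (his top choice) and saw x win.
    # Among all profiles of the two other voters compatible with x winning,
    # check that voting y instead would never have been worse and sometimes strictly better.
    from itertools import product

    CANDIDATES = ["x", "y", "z"]
    RANK = {"x": 0, "y": 1, "z": 2}               # the agent's ranking: z best, x worst

    def plurality_winner(votes):
        scores = {c: votes.count(c) for c in CANDIDATES}
        best = max(scores.values())
        return min(c for c in CANDIDATES if scores[c] == best)   # lexicographic tie-break

    compatible = [others for others in product(CANDIDATES, repeat=2)
                  if plurality_winner(list(others) + ["z"]) == "x"]

    never_worse = all(RANK[plurality_winner(list(o) + ["y"])] >= RANK["x"] for o in compatible)
    sometimes_better = any(RANK[plurality_winner(list(o) + ["y"])] > RANK["x"] for o in compatible)
    print(never_worse, sometimes_better)          # True True

The script prints True True: against every compatible profile the vote for y does at least as well as the truthful vote (which yielded x), and against some profiles it elects y, so in hindsight the agent regrets his truthful vote.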

In fair division


Tamuz, Vardi and Ziani[5] study regret in fair cake-cutting. They consider a repeated variant of cut-and-choose. In standard cut-and-choose, a risk-averse cutter cuts the cake into two pieces that are equal in his eyes. In their setting, however, there is a different cutter each day, playing cut-and-choose with the same chooser. Each cutter knows all past choices of the chooser and can potentially exploit this information to make a cut that guarantees him more than half of the cake. Their goal is to design non-exploitable protocols: protocols in which the cutter can never predict which piece the chooser will choose, and therefore always cuts the cake into two pieces that are equal in his eyes. The idea is to restrict the positions in which the cutter can cut; such protocols are called forced-cut protocols. A simple non-exploitable forced-cut protocol is the following: on each day, take all pieces generated on the previous day (by forced and non-forced cuts) and force the cutter to cut each of them into two. This protocol uses 2^n cuts, where n is the number of days. There are protocols that use fewer cuts, depending on the dimension of the cake:

  • If the cake is an n-dimensional convex set, then there is an envy-free forced-cut protocol that uses n cuts (one cut per day). Each day, the cutter is forced to make a cut in a different dimension, orthogonal to all previous cuts, so the information from previous days is not helpful.
  • If the cake is 1-dimensional, then:
    • There is an adaptive envy-free forced-cut protocol that uses 3 cuts per day. Each day, there is one adaptive forced cut, and the cutter must make two additional cuts (one on each side of the forced cut).
    • There is no non-adaptive forced-cut protocol that uses any fixed number of cuts per day.
    • There is a non-adaptive forced-cut protocol that uses O(n^2) cuts, where n is the number of days. On each day, there are n forced cuts at 1/(n+1),...,n/(n+1), and the cutter must make n+1 cuts (one in each resulting interval); a small bookkeeping sketch appears after this list.
  • If the cake is a 2-dimensional set (e.g. a square), then there is a non-adaptive forced-cut protocol that uses 3 cuts per day: on day t, there is a vertical forced cut at t/(n+1), and the cutter must make a horizontal cut on each side of the vertical cut.
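
As a small illustration of the one-dimensional non-adaptive protocol in the list above, the following Python sketch only records the forced-cut positions and counts the cuts per day; it does not model valuations or the chooser, and the function names are illustrative.

    # Non-adaptive 1-D forced-cut protocol: every day has the same n forced cuts
    # at i/(n+1), and the cutter adds one free cut inside each of the n+1 intervals.
    def forced_positions(n):
        return [i / (n + 1) for i in range(1, n + 1)]

    def cuts_per_day(n):
        return len(forced_positions(n)) + (n + 1)   # n forced cuts + (n+1) cutter cuts

    n_days = 5
    print(forced_positions(n_days))        # 1/6, 2/6, 3/6, 4/6, 5/6 (as floats)
    print(n_days * cuts_per_day(n_days))   # 55 cuts over 5 days, i.e. O(n^2)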

Cresto and Tajer[6] also study regret in fair cake-cutting between two agents, where the regret comes from a change in preferences: after one agent sees the choice of the other, his preferences may change. They suggest a variant of cut-and-choose that avoids this kind of regret.

References

  1. Arribillaga, R. Pablo; Bonifacio, Agustín G.; Fernandez, Marcelo Ariel (2022). "Regret-free truth-telling voting rules". arXiv:2208.13853 [econ.TH].
  2. Fernandez, Marcelo Ariel (2020-07-31). "Deferred acceptance and regret-free truth-telling". Economics Working Paper Archive.
  3. Chen, Yiqiu; Möller, Markus (2021). "Regret-Free Truth-telling in School Choice with Consent". SSRN Electronic Journal. doi:10.2139/ssrn.3896306. ISSN 1556-5068. S2CID 236911018.
  4. Kesten, Onur (2010). "School Choice with Consent". The Quarterly Journal of Economics. 125 (3): 1297–1348. https://academic.oup.com/qje/article-abstract/125/3/1297/1903670. Retrieved 2024-01-15.
  5. Tamuz, Omer; Vardi, Shai; Ziani, Juba (2018-04-25). "Non-Exploitable Protocols for Repeated Cake Cutting". Proceedings of the AAAI Conference on Artificial Intelligence. 32 (1). doi:10.1609/aaai.v32i1.11472. ISSN 2374-3468.
  6. Cresto, Eleonora; Tajer, Diego (2022-05-01). "Fair cake-cutting for imitative agents". Social Choice and Welfare. 58 (4): 801–833. doi:10.1007/s00355-021-01375-2. ISSN 1432-217X. S2CID 244276548.