
Talk:Superrationality


Aye guys, the telegram billionaire's case is really easy for superrationals: all of them just need to send one telegram at the same time - but at the very same time - exactly at the moment when only one of them can succeed: the deadline minus the time it takes to transfer and decode one telegram. The transfer mechanism for telegrams blocks all incoming requests until the accepted one is finished, somewhat similar to CB radios. — Preceding unsigned comment added by Mightyknight (talk • contribs) 22:10, 16 November 2016 (UTC)

Superrationality is for academic retards. — Preceding unsigned comment added by 75.71.107.42 (talk) 18:46, 25 December 2009 (UTC)[reply]

How do I know if someone is superrational?


If I just magically know, then fine, Hofstadter is right, albeit about a trivial variation of the problem.

If, on the other hand, I ASSUME them to be superrational on the basis that they're "a logical thinker", there would seem to be real-world applications, but the effectiveness of the strategy rests on its adoption by logical thinkers. And let's say I am, by all appearances, a logical thinker, but I defect in the prisoner's dilemma against a superrational opponent. Does that act alone NOT ONLY show that I'm NOT "a logical thinker", but belie an illogicality so fundamental to my nature that my opponent could somehow detect it in advance, and know that I wasn't a logical thinker, despite the fact that in all prior situations I appeared to be one?

The idea seems to fall apart. - Mr. Runtime error

You know they are superrational when they are religious.

Asymmetric games


The article says "There is no agreed upon extension of the concept of superrationality to asymmetric games." What are the extensions of superrationality to mixed games, if any? - Connelly (talk) 02:13, 1 May 2009 (UTC)[reply]

In the literature, that is. - Connelly (talk) 02:21, 1 May 2009 (UTC)[reply]
I explained it outside the literature. It is religion. If you want a literature source, read a Bible.

proves itself wrong


Interestingly, the theory of superrational beings proves itself wrong by relying on absolute determinism or perfect universal knowledge.

That's at least 4 paradoxa to solve first.

Let me bust the idea of a godlike rationalist by saying: "A superrational being cannot think of a problem it cannot solve."

What do you mean by "the theory of superrational beings," and how would absolute determinism disprove it? Factitious 06:35, 25 May 2006 (UTC)[reply]
I think that the original poster has not read the article. The article is not on "superrational beings." Superrationality is a strategy for prisoner's-dilemma-type games, not a theological doctrine or philosophy. By the way, the plural of "paradox" is "paradoxes." --67.125.30.218 17:54, 2 November 2006 (UTC)[reply]

Reference Issue

Resolved
 – The tag is no longer present. --Chealer (talk) 00:55, 24 November 2012 (UTC)[reply]

The notion of superrationality has exactly one reliable reference--- it is an original idea of Douglas Hofstadter's. Hofstadter's articles in Scientific American as expanded in Metamagical Themas discuss the subject in excruciating detail, illustrated with stories and parables and so on. It seems ridiculous to require multiple sources. Why is the tag up there? Likebox 05:17, 4 September 2007 (UTC)[reply]

Money vs. Jail Sentences


While superrationality is obviously logically consistent, it conflicts with the intuitions of both economists and bad people, mostly for different reasons. This means that it is important to be clear, so that the examples are as instructive as possible.

Money is easier for most people to understand than jail sentences, because it is hard to grasp the utility value of "2 years hard labor", but easy to understand "$200". Likebox (talk) 20:22, 19 July 2008 (UTC)[reply]


Regardless of whether or not it uses money, the first example doesn't properly describe the prisoner's dilemma. In the example as written, it's always the optimal strategy to defect, because this maximizes the reward regardless of what the other player does. There's no maximum payoff for cooperation, so superrational players wouldn't cooperate; it would effectively be a coin flip, as described in the second example. In the prisoner's dilemma there is no optimal strategy; it's considered rational to defect because a player can accurately say, "No matter what Prisoner B does, I personally am better off defecting than cooperating." 76.24.24.170 (talk) 20:32, 14 September 2008 (UTC)[reply]

That's exactly the prisoner's dilemma--- the optimal strategy according to economic rationality is to defect for the exact reason you state. "No matter what Prisoner B does, I personally am better off defecting than cooperating". This logic defines what "rationality" means for an economist.
Superrationality is different. It's a new idea. It says, "I cannot make a decision without taking into account that the other superrational person is going to think the same way as I am". This means that if I am going to defect, then I know the other person will defect too, and this is a worse outcome for me. The idea is that there is a non-causal correlation between all superrational players, so that they all play as one. Likebox (talk) 22:03, 14 September 2008 (UTC)[reply]
But this doesn't change the outcome of the game. If the other person is going to defect, you still want to defect.radek (talk) 06:22, 12 December 2008 (UTC)[reply]
But if the other person wants to cooperate, and the other person is also superrational, I will want to cooperate. If the other person wants to cooperate because the other person is a foolish cooperating robot, then a superrational player will defect. The idea requires some insight into why the other person is cooperating. Likebox (talk) 20:41, 12 December 2008 (UTC)[reply]
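
A tiny numeric check of the exchange above, as a sketch; the payoffs are assumed for illustration rather than taken from the article. It shows both halves of the argument: whatever the opponent does, defecting pays more, yet mutual defection is far worse than mutual cooperation.

# Hypothetical payoffs (mine, theirs), not the article's exact numbers.
payoff = {('C', 'C'): (100, 100), ('C', 'D'): (0, 101),
          ('D', 'C'): (101, 0), ('D', 'D'): (1, 1)}

for their_move in ('C', 'D'):
    mine_if_c = payoff[('C', their_move)][0]
    mine_if_d = payoff[('D', their_move)][0]
    print(f"opponent plays {their_move}: cooperate -> {mine_if_c}, defect -> {mine_if_d}")

# opponent plays C: cooperate -> 100, defect -> 101  (defect is better)
# opponent plays D: cooperate -> 0, defect -> 1      (defect is better)
# Yet (D, D) pays 1 each while (C, C) pays 100 each -- the dilemma.
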
The prisoner's dilemma has a wider range of punishments/rewards too. Something like 2 years vs. 5 years vs. 7 or 10 years, depending on the options. User:Guest —Preceding unsigned comment added by 75.108.253.225 (talk) 15:25, 29 April 2011 (UTC)[reply]

Bunk


Basically superrationality assumes away the problem of the Prisoners' Dilemma and then shows that the problem doesn't exist. It also renders the whole thought experiment of the PD uninteresting. And regarding "First, it is assumed that the answer to a symmetric problem will be the same for all the superrational players. Thus the sameness is taken into account before knowing what the strategy will be." - consider the following game. Player 1's choices are (L, R) and Player 2's choices are (U, D). The payoffs (Player 1, Player 2) are:

          U         D
  L    99, 97   101, 3
  R    2, 105     3, 4

How does superrationality help here, and in what sense is the game symmetric? But this is still PD. So does superrationality only apply in 'symmetric' PDs but not ones slightly non-symmetric? Also, the article needs more references and some kind of indication that this concept is at all taken seriously within scholarly fields that use game theory. radek (talk) 01:17, 13 December 2008 (UTC)[reply]
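
One way to make "symmetric" precise, sketched below; the definition is an assumption on my part, since the thread leaves it open: a two-player game is symmetric when both players have the same strategy set and swapping roles swaps the payoffs, i.e. the column player's payoff at (a, b) equals the row player's payoff at (b, a). Under that test, the game above fails once L/U and R/D are identified with each other.

def is_symmetric(payoffs, strategies):
    # payoffs[(a, b)] = (row player's utility, column player's utility)
    return all(payoffs[(a, b)][1] == payoffs[(b, a)][0]
               for a in strategies for b in strategies)

# A standard PD (hypothetical numbers) passes:
pd = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
print(is_symmetric(pd, ['C', 'D']))  # True

# The game above, relabelled with L, U -> C and R, D -> D, fails:
g = {('C', 'C'): (99, 97), ('C', 'D'): (101, 3),
     ('D', 'C'): (2, 105), ('D', 'D'): (3, 4)}
print(is_symmetric(g, ['C', 'D']))  # False (97 != 99, among others)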

Superrationality is obviously not taken seriously in scholarly fields that use game theory. This is emphasized many times in this article--- "superrationality is distinct from standard game theoretic rationality", "only game theoretic rationality is studied by mainstream economists". This is a completely non-mainstream school of thought in that sense, and it is labelled as such.
I saw your edits. It'd be good to find a source, other than Hofstadter, which states this.
By "this" what do you mean? Hofstadter didn't say "economists don't accept this", because he didn't care. I said that to make sure nobody confuses this with what economists call rationality.Likebox (talk) 15:10, 13 December 2008 (UTC)[reply]
By "this" I'm referring to statements such as the ones you put in quotation marks above. It would be good to find a source, even a non academic one which says this.
However, this approach to the prisoner's dilemma is not unique to Hofstadter. A similar approach was proposed by an earlier writer, who did not use the term "superrationality", whose work is discussed in the article on the prisoner's dilemma. I personally came up with it when I read about the PD, and I am sure that millions of others did too. It is a common response.
Know the writer's name? Should be included in the article and there'd be an additional source.
The name is David Gauthier, and no he should not be included because he is not mathematically precise, unlike Hofstadter. Likebox (talk) 15:10, 13 December 2008 (UTC)[reply]
He should still be included, even if he is not mathematically precise - that's not a necessary criterion. Lots of concepts were first stated intuitively and only got formalized later on.
The issue with references is this: this idea is not taken seriously in any academic field. My opinion is that this is because the people in those fields are being dumb, but that's just my opinion. So there's only one real reference, which is "Metamagical Themas", but it is in-depth and has been cited many times. It's sort of like Darwinism in 1860. There's only one source, and not many people take it seriously. It's also like Polywater, which was bunk.
If Metamagical Themas is cited often then some of these citations should make mention of superrationality, and they could be added.
 "cited often" means outside academia. You can easily find a hundred citations by googling "superrationality". As far as I know, it is cited exactly zero times within academia.Likebox (talk) 15:10, 13 December 2008 (UTC)[reply]
Non-academic sources, as long as they're reliable and cautiously used, should be fine. Things like book reviews and other writers referring to it. But not blogs or the like. It's actually harder to find these kinds of references than you say. radek (talk) 01:11, 14 December 2008 (UTC)[reply]
The question of asymmetry is not addressed by Hofstadter, nor by anybody else that I know of, but it is most certainly not the reason that the concept has been ignored. I have my own answer to the question, which I consider 100% satisfactory, but I don't think you would care. My answer, like almost any other consideration of superrationality, is not published anywhere. This is why the article sticks to stuff that's verifiable from Hofstadter.
Again, it'd be nice to have a source which says that this is not why it's ignored, or which mentions this lack as a critique of superrationality. Surely there should be some reviews or something out there?
It's not necessary, since this information is not in the article. This stuff is not studied anywhere in academia. Likebox (talk) 15:10, 13 December 2008 (UTC)[reply]
Well, it seems to me like the fact that this concept (ignoring its other problems) isn't generalizable to non-symmetric games is probably at least a factor in why nobody pays attention - a concept that requires a strict symmetry in the payoff matrix (or in "names" of strategies - I'm not sure what exactly symmetry means here) is just not very useful.
Before you dismiss it as bunk, remember that the "rational" response to an N-times iterated prisoner's dilemma, the unique answer, is to defect N times. This is both completely wrong intuitively, and fares badly in evolutionary tournaments. A more correct answer, like tit-for-tat, is provably economically irrational, but winning in practice.
Only if you know N. I presume you're referring to Axelrod's tournament (actually that should have its own article - Axelrod's Tournament) per your comments over at Prisoner's Dilemma. But the number of iterations in the tournament was not known or fixed beforehand (when participants wrote and submitted their algorithms), partly because the number of rounds was going to depend on the actual number of responses, given the round-robin nature of the tournament. So in that setting tit-for-tat SHOULD win, per standard economic theory. As far as evolutionary games/tournaments with many players - well, that's a different game, isn't it? radek (talk)
I don't think you are right. Axelrod had each program play each other program 100 times (or something). He told the people that the programs were going to play each other, round robin, but the number of iterations was fixed. I don't know if he told the people how many iterations there were going to be beforehand. I agree it makes a difference in theory, but it would not have changed the results very much had he told everybody that each round was 100 steps.
I tried to look up the exact details of Axelrod's tournament and its experimental design, since sometimes the devil is in the details (especially with game theory). Regardless, even with a fixed and known number of iterations, this is a different game than the one to which the 'always defect' result for finite games applies. That result only works in games where everybody's rational. But as soon as you have some non-rational players (or programs) around, it's not necessarily the optimal strategy - as Aumann notes in Cretog8's quote. This is why we use evolutionary game theory to analyze these kinds of non-equilibrium-path possibilities. But that doesn't make the standard analysis of the iterated PD "wrong" - the two explanations are complementary, they apply in different situations, and the relevance of each is going to depend on the research question you're asking. For example, let d denote the share of programs who play 'always defect', p the share of those who play 'always cooperate', and 1-d-p the share of 'tit4tat'. With the payoffs given in the article (100, 100, etc.) and 2 iterations in an Axelrod-style tournament, 'tit4tat' will always dominate 'always cooperate'. But 'always cooperate' will dominate 'always defect' if d < (98/100) - p, and 't4t' will dominate 'always defect' if d < (98/99) - (100/99)p. But this doesn't change the fact that in a finitely iterated PD between two "rational" players, the optimal strategy is to always defect.
But in fact all that's irrelevant. It's not like the results of Axelrod's tournament show that superrationality wins, or that it is in some sense a better model of rationality. How would one even enter a 'superrational' concept into the tournament? Maybe there'd be some ways, but at the very least the rules would disallow it. It's an empty concept that assumes away the problem.
In standard game theory, you would assume that the programs would all defect on the last round, to get the advantage. But then you conclude that the programs defect on the second-to-last round to gain an advantage, because they know the last round is defect, and so on, and so on, and defect all the way. Still, in this defecting environment, a renegade tit-for-tat would not do too badly if it found a friendly program to gain benefit with. In other words, for purposes of evolution, the fact that the number of steps is fixed makes very little difference (at least as long as the number of steps is large).
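
Since the thread keeps circling Axelrod's setup, here is a minimal round-robin sketch one could use to test such claims. This is not Axelrod's actual tournament code; the payoff matrix, the 100-round length, and the three-strategy population are all assumptions, and with a population this small the ranking depends heavily on the mix, which is exactly the point about shares made above.

# Hypothetical payoffs (mine, theirs); T > R > P > S as in the canonical PD.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
ROUNDS = 100

def always_defect(mine, theirs):
    return 'D'

def always_cooperate(mine, theirs):
    return 'C'

def tit_for_tat(mine, theirs):
    return theirs[-1] if theirs else 'C'

def match(a, b, rounds=ROUNDS):
    # Play one iterated match; return (a's total, b's total).
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        sa, sb = sa + pa, sb + pb
        ha.append(ma)
        hb.append(mb)
    return sa, sb

strategies = {'always_defect': always_defect,
              'always_cooperate': always_cooperate,
              'tit_for_tat': tit_for_tat}
names = list(strategies)
totals = dict.fromkeys(names, 0)
for i, na in enumerate(names):
    for nb in names[i:]:  # round robin, including a twin of oneself
        sa, sb = match(strategies[na], strategies[nb])
        totals[na] += sa
        if nb != na:
            totals[nb] += sb
print(totals)
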
The case of asymmetry is difficult because it, in my opinion, is essentially the same as the question of religious ethics in the presence of human inequality.Likebox (talk) 05:20, 13 December 2008 (UTC)[reply]
I'm not sure what the above means. Above comments by radek (talk) 07:25, 13 December 2008 (UTC).[reply]
I explain it below in detail. Also, it is not good style to intersperse comments like this. It makes the discussions hard to follow. Likebox (talk) 15:10, 13 December 2008 (UTC)[reply]
When there's a whole host of separate issues, this kind of format makes it much easier for me and it prevents the discussion from straying from the topic and changing of subject.radek (talk) 01:11, 14 December 2008 (UTC)[reply]
I'm poking slowly through papers. Lots of game theorists do think about various irrational/boundedly rational/magical thinking behavior, so while this kind of stuff remains nonstandard, it's not fringe. Unfortunately (for the article) there seem to be a fair number of papers which explore similar ideas, but don't refer in any depth to Hofstadter's superrationality. Even more unfortunately, some use "superrationality" in a different way--mostly informally. For instance, Aumann (ref is requested) says:
The paradox is resolved by noting that in a game situation, one man's irrationality requires another one's superrationality. YOU must be superrational in order to deal with MY irrationalities. Since this applies to all players, taking account of possible irrationalities leads to a kind of superrationality for all. To be superrational, one must leave the equilibrium path. Thus a more refined concept of rationality cannot feed on itself only; it can only be defined in the context of irrationality.
That makes me uncomfortable--I don't know if that difference is something which could be solved with a simple disambiguation line.
Anyway, hopefully I'll find something, or maybe there needs to be a more general article on "magical thinking in game theory" which includes superrationality as part of it. Meanwhile, I've restated the bit about it being fringe, but I'm not really comfortable with that without a source--is it from Hofstadter himself? (Tough to demonstrate a negative like that, which makes it tricky...) CRETOG8(t/c) 09:51, 13 December 2008 (UTC)[reply]
(BTW, when I say I'm "slowly through papers", I mean REALLY slowly. I don't want my saying that to slow anyone else down who's thinking of doing the same thing.) CRETOG8(t/c) 09:57, 13 December 2008 (UTC)[reply]


Well, I like Aumann's idea of superrationality a lot better and find it more applicable. I'm not sure he ever formalized the notion, however. Let us know if you find anything and I'll try to look through some of that stuff as well. radek (talk) 01:11, 14 December 2008 (UTC)[reply]

(deindent) Ok-- some comments: While "superrationality" can mean something other than this particular form of superrationality, far and away the most well-known meaning of the word is Hofstadter's. No disambiguation is needed, because no other use is notable enough, as far as I know. In the cited context, the idea was that "superrationality" means "other than rational" because it is capable of "dealing with another's irrational behavior". Notice that they automatically assume that rational is optimal. They do not suggest, as Hofstadter explicitly does, that standard economic rationality is completely wack and needs to be replaced.

Hofstadter, as far as I know, is the first person to explicitly challenge economic rationality, to call its bluff. He says "This is garbage", and gives a mathematically precise alternative. The use of the pejorative "magical thinking" to refer to superrationality is absurd. There is no magic involved. It is only referred to as magical thinking by those with an irrational attachment to game-theory rationality.

A lot of people have 'explicitly challenged economic rationality' - some more successfully than others. BTW, how precise is Hofstadter? Does he define 'symmetry', for example? And yes, this is an example of 'magical thinking', since it relies on the belief that somehow your own actions (or reasoning) can influence the simultaneously undertaken actions (or reasoning) of another person. It's a bit reminiscent of Newcomb's Paradox (which isn't a paradox either). radek (talk) 01:11, 14 December 2008 (UTC)[reply]
(interspersing comments makes the discussion hard to follow for others--- but I'll respond in this style) Read his book. It's precise enough to know exactly what he's saying, and it has a discussion of probabilistic scenarios, where the optimal probability is determined by the reasoning. It's written for a general audience, so you might not like it very much, but it's precise. You seem to know the literature much much better than me--- why not put in all the stuff you know? Maybe superrationality is an old idea.
But from your comments, I get the feeling that you don't appreciate the logic of superrationality: Hofstadter doesn't engage in magical thinking. He does not assume that his decision will influence another person's decision in any causal way. That's impossible. What he assumes is that the other person already is superrational, so that his decision is going to be perfectly correlated with Hofstadter's in a symmetric situation, and he assumes that both he and his opponent take this into account before maximizing their utility. This is not magical, because correlation does not imply causation. It's a circular definition of a decision algorithm that only looks magical if you already believe (in the religious sense) in economic rationality. Then any deviation from defection looks irrational, and you can't be persuaded otherwise because economic rationality is self-consistent.
But Hofstadter points out that superrationality is equally self-consistent, so that the prisoner's dilemma is fundamentally an ill-posed problem, with several consistent answers. This leaves him with a free choice of algorithm. He chooses superrationality because it seems to his intuition to be correct, and he urges his readers to do so too, with mixed results. Likebox (talk) 01:58, 14 December 2008 (UTC)[reply]
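
Hofstadter's reasoning in the exchange above can be compressed into one expected-utility line (my paraphrase, not his notation). Writing u(a, b) for my payoff when I play a and the opponent plays b, the superrational player conditions on perfect correlation, so E[u | I play C] = u(C, C) and E[u | I play D] = u(D, D), and cooperation is chosen exactly when u(C, C) > u(D, D). The Nash-rational player instead holds the opponent's action b fixed and compares u(C, b) against u(D, b) for each b, which is why defection dominates for him.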

The everyday definition of "rational" is "beholden to reason" and "making sensible decisions which maximize utility", while the game-theoretic definition of "rational" means "plays in a Nash equilibrium". Identifying the two will confuse a nonexpert reader. To be clear, you should always say "Nash rational" or "game-theoretically rational" in contexts where a lay reader is reading, so they will not get confused and think that rational means "best" or "most sensible". This is not necessary within academia, where there is only one accepted meaning of rational. But if you are talking to a non-academic, you must be clear.

Now that this is out of the way, I'll give you my personal definition of superrationality in a non-symmetric context. The way you do that is by defining a "religion".

A "religion" is an algorithm which decides games. Given the payoff matrix, and the nature of the players (meaning what algorithm they use, what religion they are) it tells you what to play. Each opponent's religion is important in deciding the value of the game. A superrational religion R is one where two opponents using R cooperate in a one-shot symmetric PD. But the religion can also tell you what to do in multi-player PD, and it can also tell you to defect in a symmetric situation.

This definition does not tell you how to construct a religion. The algorithm can be constructed in many ways, to maximize the payoff to its members, but the religion can also be thought of as having a "collective payoff", whose value reflects the utility of the community as a whole; this is the utility of a "god" (lower case g--- there are as many gods as there are religions). The utility of the god, and the relation of the individual to the god, defines the appropriate play in each situation. The religion of "utilitarianism" defines the utility of the god to be the sum total utility of all the players, and maximizes this quantity. It is an example of a superrational religion.

The religion "game-theory" finds Nash equilibria and plays them. This religion is perfectly individualistic. It does not require any collective utility at all to define the action of individuals. On the other hand, a superrational religion will require a collective utility, and if it is to reproduce the superrational answer, it should treat two members symmetrically in a situation of symmetric payoff.Likebox (talk) 15:43, 13 December 2008 (UTC)[reply]

Several game theory articles have referred to what they call "quasi-magical thinking", which means that players don't explicitly believe that their actions will affect the other players' actions, but they behave as if they do believe it. A bit peculiar, but I think tied to H's superrationality. On the other hand, I think I might need to read H firsthand before I really get it. CRETOG8(t/c) 02:01, 14 December 2008 (UTC)[reply]

Prisoner's Dilemma error


The Prisoner's Dilemma shown on this page is not actually a Prisoner's Dilemma as described on that page, because the inequalities don't hold. Quote:

Canonical PD payoff matrix

             Cooperate   Defect
  Cooperate    R, R       S, T
  Defect       T, S       P, P

Where T stands for Temptation to defect, R for Reward for mutual cooperation, P for Punishment for mutual defection and S for Sucker's payoff. To be defined as prisoner's dilemma, the following inequalities must hold:

T > R > P > S

This condition ensures that the equilibrium outcome is defection, but that cooperation Pareto dominates equilibrium play.

One article or the other needs to be fixed.

WBTtheFROG (talk) 16:53, 5 February 2011 (UTC)[reply]


The inequalities are satisfied in the example.


Indeed, the numbers satisfied the inequalities as of March 3, 2024. Nonetheless, I chose to change the numbers slightly, for the following reason. In addition to the above inequalities, it is often required that (C,C) strictly maximizes social welfare. In the previous version, uniform randomization between (C,D) and (D,C) was as good as (C,C). -- Hkfscp11 (talk) — Preceding undated comment added 15:16, 3 March 2024 (UTC)[reply]
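
Both conditions discussed in this section are easy to check mechanically; a small sketch, with purely hypothetical numbers. The chain T > R > P > S is the quoted canonical definition, and 2R > T + S is the "(C,C) strictly maximizes social welfare" condition mentioned above, since randomizing between (C,D) and (D,C) averages (T + S)/2 per player.

def is_strict_pd(T, R, P, S):
    # Canonical ordering plus the social-welfare condition.
    return T > R > P > S and 2 * R > T + S

# The first set passes the ordering but fails the welfare condition,
# because (T + S) / 2 equals R exactly; the second set passes both.
print(is_strict_pd(T=200, R=100, P=1, S=0))  # False
print(is_strict_pd(T=101, R=100, P=1, S=0))  # True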

Super-rationality as social convention


Super-rationality is just another way of stating that one or both parties hold a view not by observation or experiment, but by agreement. Rather than trusting to rational discovery alone, there is a convention or decision as to a particular sort of situation and how it ought to be handled. All cultures and societies use such conventional decisions to obtain better results than would be indicated by rational discovery alone. In the Travelers' Dilemma (TD), a large group of persons might form a convention in which all parties agree to bid $100, and thereby obtain a shared optimum for the benefit of all players observing the convention. Similarly, the Prisoners' Dilemma (PD) is often resolved in real life by having all parties observe a convention of not defecting, and those who seem unlikely to observe the convention face exclusion or other dire consequences.

Conventions with mostly symbolic value to the group, but that are costly to the individual observing the convention, are often used as a means to establish "proof" of loyalty to the convention, and to the group which holds that convention as a group value. Other more practical conventions, for which proof or evidence may be difficult to produce or establish, can then be assumed likely to be observed, as defection can be punished and cooperation can be rewarded, offsetting the benefits or costs of breaking or honoring the convention. Thus conventions allow individuals to obtain an optimum available only to a group, by becoming a member of that group, with the assurance that any member of the group not observing the convention will be punished by any, and perhaps all, members. Defection becomes more costly, as the defection is not against a single competitor, but against a host of competitors. TheLastWordSword (talk) 13:42, 1 April 2013 (UTC)[reply]

No it isn't. Superrationality makes sense with a community of two people, without any enforcement and punishment, only as long as both people are superrational. The enforcement and such is for the real world, where there are always some idiots who aren't superrational, and so need to be punished, to make the superrational and rational equilibria coincide as closely as possible.74.73.177.13 (talk) 11:19, 5 June 2014 (UTC)[reply]

Removed section: Possible real world cases


Removed:

Despite several attempts reported in his book Metamagical Themas, Hofstadter failed to obtain experimental results that would lend support to the claim that under specific circumstances human individuals do reason as described by the concept of superrationality. Proponents of superrationality argue that in a group of people with similar wishes and incomes, superrationality may explain the existence of:

  1. Charity, because while one person not contributing does not hurt the charity, everyone not contributing does.
  2. Voting in elections which are not close to even, because while one vote does not matter, the bloc of similar people does.
  3. The Mutual Assured Destruction strategy of nuclear deterrence during the Cold War, which made defection a much worse situation for each player even if it would provide the slight advantage of being the last to die.

In the first two cases, superrationality may be seen as an antidote to or opposite of the Bystander Effect, or more generally diffusion of responsibility.

On the other hand, game theorists believe that all of this behavior can be understood on purely rational grounds. People may give to charity because they have altruistic preferences; others may vote because they find value in exercising their civic duty; and the Soviet Union's and United States' disarmament could be explained by each fearing being obliterated by a retaliatory nuclear strike.

This has been labelled as an "Unreferenced section" for over two years (November 2012). It sounds like the musings of the editor or editors, rather than anything that has been written on the subject by any third party. -- Oliver P. (talk) 17:26, 8 February 2015 (UTC)[reply]

This talk page has some very witty, interesting, and funny comments, and I'd just like to say thank you to everyone who participated in my entertainment this evening, no irony, sarcasm, or facetiousness intended. — Preceding unsigned comment added by 137.118.203.102 (talk) 06:29, 26 April 2015 (UTC)[reply]

Comparison between superrationality and Kant's categorical imperative


The comparison between superrationality and Immanuel Kant's categorical imperative was removed in October 2015 with the rationale "sources are a very brief review and a math paper, nothing from a philosopher". I disagree, and have reinstated the paragraph. To be quite honest, there aren't a whole lot of proper sources covering superrationality at all. Among the ones there are, more than one have claimed that superrationality is a form of Kant's imperative. Of course, I welcome a broader selection of sources discussing this issue. But in the state the article is currently in, the removal of these sources based on the fact that their authors are not philosophers is, in my mind, unwarranted. Gabbe (talk) 08:27, 24 October 2016 (UTC)[reply]

It's still just rationality


If we are to speak of "superrationality," we cannot say that the key to it is side A assuming that side B is superrational. Side A must know from the beginning that B is at least as intelligent as A, and hence A may already reasonably assume that B will reason in the same way as A. However, this means that "superrationality" is just rationality with the additional condition that A and B have more knowledge about each other. Then the use of the usual expected value is wrong, because the weights of the two events depend on each other. I think it can be shown iteratively that the weight of one event will be 1, and therefore the other 0. So this article did not show the concept of superrationality, but only rationality under additional conditions. — Preceding unsigned comment added by 31.183.226.236 (talk) 02:55, 16 November 2019 (UTC)[reply]

Rated article


I've rated this article as Start-class for WikiProject Game Theory, copying the quality rating from WikiProject Philosophy (which I agree with, since, although the article is long, much of it is suspected to be original research). As for importance, I think superrationality is a fairly important concept because it explains why people cooperate in the prisoner's dilemma, among other things, so it's at least High-importance - but I didn't rate it Top-importance because apparently this concept is not well accepted among game theorists. Please review; thanks. Duckmather (talk) 14:30, 28 April 2021 (UTC)[reply]

Naming


I would like to take issue with the choice of name for this concept: "superrational" suggests that a player abiding by the principle is more rational than a merely rational player, and "rational" is synonymous with "reasonable" - in particular, not making unrealistic assumptions. But in a prisoner's dilemma, for instance, it is sometimes unreasonable to assume the other player does not defect. It depends on other factors; often it is quite rational to assume the other will defect, and in that case superrational is the opposite of rational.

I would suggest renaming this concept "symmetry-assuming", "reciprocating", "mutualistic", "mildly cooperating", "coordinating", or "collective": in effect each player assumes the other will do the same, thereby providing some sort of cooperation - improving both players' utility by tacit agreement. Also "not too selfish", "reasonably selfish", or "reasonably altruistic", as one selects a strategy which also benefits the other player, by symmetry, even though it is not totally selfishly optimal in the worst case possible according to the rules of the game; in a sense those strategies are midway between locally optimal (i.e. Nash) and globally optimal (i.e. socially optimal).

One can also say "selfish and globally optimizing" to refer to that dual objective: maximizing one's utility, but not to the detriment of the other's; one player is not willing to sacrifice himself, but is willing to play a strategy where both can win, with a modicum of effort. Cooperating is a dominant strategy in the repeated PD, but a given player has to threaten the other with monitoring and punishing defection to ensure cooperation, which is in a sense an effort. One can also call it "selfish but willing to cooperate": this is the first-turn action in a tit-for-tat or relentless repeated-game strategy. Thank you for your attention. Plm203 (talk) 13:08, 9 July 2024 (UTC)[reply]