Crowdsourced psychological science

From Wikipedia, the free encyclopedia

Crowdsourced science (not to be confused with citizen science, a subtype of crowdsourced science) refers to the collaborative contribution of a large group of people to the different steps of the research process in science. In psychology, the nature and scope of such collaborations vary in their application and in the benefits they offer.

What is crowdsourcing?


A complement to the traditional way of doing science


Crowdsourcing is a collaborative sourcing model in which a large and diverse number of people or organizations can contribute to a common goal or project.

Early examples of crowdsourced science date back to the 19th century: a Yale University professor issued an open call for citizen participation to maximize the number and diversity of observations he could gather on the 1833 Leonid meteor storm.[1]

Crowdsourced science was set aside for a long time and has only recently gained popularity. It helps overcome several limitations of the classic model under which science has been conducted until now. For example, in psychology, scientific research has often been limited by small sample sizes and a lack of diversity in studied populations.[2] These limits can be tackled with a more collaborative approach to scientific research (i.e., crowdsourced science).[3]

A two-dimensional concept


Crowdsourcing initiatives can be described along two continuous axes.[4] The first dimension represents the degree of communication between project members, ranging from very few collaborators with little communication to a large crowd of collaborators with rich communication. The second dimension corresponds to the degree of inclusiveness, varying from selecting only groups of people with high expertise in the field of interest to openness to anyone interested in collaborating, with or without expertise. This twofold distinction yields four types of crowdsourced science projects. However, because both dimensions lie on a continuum, a single project is never entirely inclusive (vs. selective) nor entirely high (vs. low) in communication.[4]

Why crowdsource psychological science?


Limits of the traditional vertical model


In psychology, the traditional way of doing research follows a vertical model.[4] In other words, most of the time, research is carried out by small teams from a specific lab or university, organized around one or two main researchers (often referred to as principal investigators, or PIs). This small team contributes to all the different stages of the research project: conception of the research question, design of the study, data collection, data analysis, discussion of the results, and publication of the manuscript.[4]

However, vertical science has limitations that can impede scientific progress. Conducting science within small, independent teams makes it difficult to run large-scale projects (with large samples, large databases, and high statistical power), which may limit the scope of research. In traditional science, researchers have access to fewer resources, data, and methodologies.[4] Small, independent team projects are also limited in the feedback they can get from other perspectives.[4]

Nowadays, research teams no longer necessarily have to make such compromises (e.g., small sample sizes, identical stimuli, identical contexts, no replication),[4] because many of these limits can be tackled through more crowdsourced research.

The replication crisis


In the scientific method, replicability is a fundamental criterion for qualifying a theory as scientific. The replication crisis (or credibility crisis) is a methodological crisis in science that researchers began to acknowledge around the 2010s. The controversy revolves around the lack of reproducibility of many scientific findings, including those in psychology (e.g., in one large-scale effort, fewer than half of 100 findings were replicated).[5][6]

Some of its main underlying causes are referred to as “questionable research practices”.[7] These include p-hacking (i.e., exploiting researcher degrees of freedom until a significant result is obtained), HARKing (i.e., hypothesizing after the results are known),[8] publication bias (i.e., the tendency of scholarly journals to publish only significant results),[9] statistical reporting errors,[10][11] and low statistical power, often due to small sample sizes.[12][6]
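
The impact of such practices can be made concrete with a small simulation. The sketch below is illustrative only: the test statistic, batch sizes, and thresholds are arbitrary choices, not drawn from any cited study. It shows how one form of p-hacking, optional stopping (peeking at the data and stopping as soon as significance is reached), inflates the false-positive rate well above the nominal 5% even though the null hypothesis is true:

```python
import random
import statistics
from math import erf, sqrt

def p_value(sample):
    # Two-sided one-sample z-test against a mean of 0 (normal
    # approximation) -- crude, but enough to show the inflation.
    n = len(sample)
    z = statistics.mean(sample) / (statistics.stdev(sample) / sqrt(n))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def run_study(optional_stopping, n_start=20, n_max=100, step=10):
    # The data are pure noise: any "significant" result is a false positive.
    data = [random.gauss(0, 1) for _ in range(n_start)]
    if not optional_stopping:
        return p_value(data) < 0.05
    # Peek at the p-value after every new batch and stop at significance.
    while len(data) < n_max:
        if p_value(data) < 0.05:
            return True
        data.extend(random.gauss(0, 1) for _ in range(step))
    return p_value(data) < 0.05

random.seed(1)
trials = 2000
fixed = sum(run_study(False) for _ in range(trials)) / trials
peeked = sum(run_study(True) for _ in range(trials)) / trials
print(f"false-positive rate, fixed n:           {fixed:.3f}")
print(f"false-positive rate, optional stopping: {peeked:.3f}")
```

With a fixed sample size the false-positive rate stays near the nominal level, while repeated peeking pushes it substantially higher, which is the core of the p-hacking concern.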

Among these various issues, small sample sizes and the lack of diversity within samples can be addressed through crowdsourced science—increasing the generalizability of findings and therefore their replicability as well. Indeed, samples in psychology often rely on college students[13] and Western, Educated, Industrialized, Rich and Democratic (WEIRD) populations.[14][15][16]

For these reasons, the replication crisis has contributed to the rise of crowdsourced, large-scale projects, especially replication projects held at an international scale like the Many Labs project,[17][18][19][20][21] and the Psychological Science Accelerator (PSA).[3] These crowdsourced projects aim at solving some issues raised by the replication crisis, more specifically by assessing the replicability of studies and the generalization of results to other populations and contexts.

Ambitions of the horizontal model


In contrast to the vertical model of doing research, the horizontal model—an inherent principle of crowdsourced science—mainly relies on variations in terms of inclusiveness and communication. Its core principle is the absence of authority of one or two researchers over resources, ownership, and expertise.[22] Following this principle, the different tasks within a project are distributed among many researchers. The whole project is then supported by a team, which ensures the coordination of the contributors.[4]

A perfectly horizontal model does not really exist, because vertical and horizontal models are better conceptualized as extremes of the same continuum.[4] This distributed model of scientific work is gaining popularity in the scientific community. Over roughly 40 years and across different scientific disciplines, research teams have approximately doubled in size (from two to four people on average).[23]

By encouraging larger crowds to contribute to research projects in psychology, the horizontal model aims at reducing some limits of the vertical one. It has three distinct ambitions:[4]

  • Carry out wide-ranging works
  • Encourage a democratized psychological science
  • Establish robust findings

Large-scale research projects


The first ambition of the horizontal model is to enable researchers to conduct more large-scale research projects (i.e., ambitious projects that cannot be conducted by small teams).[4] By aggregating various skills and resources, it is possible to move from a model where research is defined by available means, to a model where research itself defines the necessary means to answer the research question.[4]

Democratizing psychological science


The second ambition of the horizontal model is to compensate for inequalities (e.g., in terms of recognition, status, and success in a researcher's career). Psychology, and the social sciences more generally, show a strong bias known as the Matthew effect[24] (i.e., academic advantages accruing to those who are already the most renowned).

Early career researchers from less renowned institutions, less economically developed countries or underrepresented demographic groups are generally less likely to have access to high-profile projects.[25] Crowdsourcing enables such researchers to contribute to impactful projects and gain recognition for their work.[4]

Robust findings


The third ambition of the horizontal model is to improve the robustness, generalizability, and reliability of findings in order to increase the credibility of psychological science.[4][26] Horizontal collaboration between research teams facilitates the replication of studies[27][17][28] and makes it easier to detect biased effect sizes, p-hacking, and publication bias—different problems raised by the replication crisis[29] (see also #The replication crisis).

Crowdsourcing in practice


Contributions at different stages of research


This section aims at detailing how crowdsourcing can contribute to the different stages of the research process, from the generation of research ideas to the publication of the outcomes.

Ideation


Ideation is the first step of any research project. In psychology, it refers to the process of defining the general idea behind a project—purpose, research question, and hypotheses. This step can be carried out collaboratively by several researchers to scan a broader spectrum of ideas and select those of greatest interest and impact.[4] Faced with a research question, the different collaborators can bring their expertise to the construction of hypotheses.

Crowds can also help generate new ideas to solve complex problems, as illustrated by the Polymath project.[30][31][32][33]

Assembling resources


When assembling resources, crowdsourcing can be useful, especially for labs with fewer resources at their disposal.[4] Online platforms such as Science Exchange and StudySwap allow researchers to establish new lines of communication and share resources between labs. Matching resources from labs across the globe minimizes waste and helps research teams meet their goals.[4]

Sharing tasks across labs can improve the efficiency of a research project, especially a highly time-consuming one. In biology, for example, studying the entire genome takes a lot of time. By distributing its investigation and combining resources across multiple labs, it is possible to accelerate the research process.[34]

Study design


There are many ways to design a study: research teams across the world differ both in theoretical background and in the materials available to them.[4]

Crowdsourcing can be useful for conceptual replications (i.e., testing the same research question through different operationalizations).[35][36] When testing the same research question, variations in study design can lead to strong variations in effect size estimates.[28] Diversifying the methods used to test the same hypothesis across different populations, through collaborative projects, enables better estimation of the true consistency of a scientific claim.[4]

Data collection


In psychology, data collection often relies on samples drawn from Western, Educated, Industrialized, Rich and Democratic (WEIRD) populations, which impedes the overall generalizability of findings.[15][16] Crowdsourcing the data collection process by relying on multi-lab data collection and online crowdsourcing platforms (e.g., Amazon Mechanical Turk, Prolific[37]) makes it easier to reach a wider audience of participants from different cultural backgrounds and non-WEIRD populations.[4] When the research question makes it possible to rely on internet samples, it is also an easy way to recruit larger samples of participants with minimal financial input and within short amounts of time.[38][39][40]
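
The appeal of larger samples can be quantified with a back-of-the-envelope power calculation. The sketch below uses the common normal-approximation formula for two independent groups (z quantiles hard-coded for the usual alpha and power levels; this is a rough planning heuristic, not a substitute for an exact power analysis):

```python
from math import ceil

def n_per_group(effect_size, alpha=0.05, power=0.80):
    # Approximate per-group n for a two-sample comparison:
    # n = 2 * ((z_{alpha/2} + z_{power}) / d)^2
    z_alpha = {0.05: 1.96, 0.01: 2.576}[alpha]
    z_power = {0.80: 0.8416, 0.90: 1.2816}[power]
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A "small" effect needs far more participants than a "large" one
# (Cohen's conventional d values of 0.8, 0.5, and 0.2):
print(n_per_group(0.8))  # large effect:  25 per group
print(n_per_group(0.5))  # medium effect: 63 per group
print(n_per_group(0.2))  # small effect:  393 per group
```

Detecting small effects with adequate power quickly requires hundreds of participants per condition, which is exactly where multi-lab data collection and online recruitment platforms help.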

Most of the time, members of the general public are recruited to take part in studies as research participants, but they can also be recruited to collect data and observations.[4]

Data analysis


In research, data analysis refers to the process of cleaning, transforming, and modeling data using statistical tools, often with the purpose of answering a research question. Within a research project, this is typically done by a single analyst (or team) and results in a single analysis of a dataset.

Analytic strategies can differ greatly from one team to another.[4][41] For example, a study found that among 241 published articles on fMRI, 223 different analytic strategies were used.[42][43] Moreover, there can be many ways to test a single hypothesis from the same dataset. Although defensible, decisions in data analysis remain subjective and can greatly affect research results.[41] One way to counter this subjectivity is transparency. When data, analytic plans, and analytic decisions are made transparent and openly accessible to the rest of the community, it facilitates criticism and gives others the opportunity to explore alternative ways to analyze the data.[41]
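
The point can be illustrated with a toy "many analysts" example. The data and the three analytic strategies below are invented purely for illustration: each choice (keeping or trimming extreme values, comparing means or medians) is individually defensible, yet they produce different effect estimates from the same dataset.

```python
import random
import statistics

random.seed(0)
# Simulated dataset: a modest true group difference plus noise, with a
# few extreme values that different analysts might treat differently.
group_a = [random.gauss(0.0, 1.0) for _ in range(50)]
group_b = [random.gauss(0.3, 1.0) for _ in range(50)] + [5.0, 6.0]  # outliers

def trim(xs, cutoff=2.0):
    # Drop observations more than `cutoff` standard deviations from the mean.
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [x for x in xs if abs(x - m) <= cutoff * s]

# Three defensible strategies for the same question ("do the groups differ?"):
estimates = {
    "keep all, compare means":    statistics.mean(group_b) - statistics.mean(group_a),
    "trim > 2 SD, compare means": statistics.mean(trim(group_b)) - statistics.mean(trim(group_a)),
    "compare medians":            statistics.median(group_b) - statistics.median(group_a),
}
for strategy, estimate in estimates.items():
    print(f"{strategy}: {estimate:+.2f}")
```

No single estimate is "the" result; crowdsourcing the analysis makes this dependence on analytic choices visible instead of hiding it behind one team's pipeline.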

“Crowdsourcing the analysis of the data reveals the extent to which research conclusions are contingent on the defensible yet subjective decision made by different analysts.”—Uhlmann et al., 2019[4]

Writing research reports


A research report is a document reporting the findings of a research project. The overall quality of a research report can benefit from crowdsourcing practices, especially during the writing process.[44] Aggregating a large number of contributors increases the range of expertise and perspectives,[44] which helps build more solid arguments.[4] It also makes proofreading easier (e.g., catching grammatical errors, awkward phrasing, typos, biases, and factual errors, and checking claims).[4][44]

To facilitate collaborative writing, some researchers have suggested guidelines for writing manuscripts with many authors.[44] There should always be a leading author (or a few, each with individual responsibilities) who manages the writing process and takes explicit responsibility for any mistake, avoiding diffusion of responsibility in case of errors.[44] It is also recommended that the leading author(s) follow four general principles:[44]

  • Careful crediting of the coauthor team
  • Clear and frequent mass communication
  • Well-organized materials associated with the manuscript
  • Early and deliberate decision-making

Peer review


Before getting published in an academic journal, submitted papers undergo peer review (i.e., the process of having an author's academic work reviewed by experts from the same field). Typically, it is performed by a small number of selected reviewers. Crowdsourcing the peer review process increases the chances of obtaining reviews from a larger number of experts in the relevant domain.[4] This is also a way to significantly increase opportunities for better criticism and faster fact-checking before an article gets published.[45]

Crowdsourced peer review can be combined with open-access peer review[46] (e.g., through centralized platforms dedicated to the discussion and criticism of research reports).[47]

Examples in psychology


The replication crisis served as a prelude to the emergence of many large-scale collaborative projects. Some of the most important ones include the ManyLabs project,[17][18][19][20][21] the ManyBabies project,[48][49] the #EEGManyLabs project,[50] the Reproducibility Project,[6] the Collaborative Replication and Education Project (CREP),[51][52] and the Psychological Science Accelerator (PSA).[3]

Projects surrounding the COVID-19 pandemic


In response to the COVID-19 pandemic, several initiatives to improve collaboration between scientists from a variety of fields have been launched.[53][54]

Studies exploring the impact of the COVID-19 pandemic on behavior and mental health have been conducted[55] and widely shared across social media such as Twitter.[56][57][58][59][60][61]

In this context, the PSA issued a call for studies on the COVID-19 disease[62] and received 66 study proposals from different experts within one week.[63] As of May 2020, three studies had been selected and were being conducted worldwide.[63] Two of these studies aim at improving the adoption of health behaviors to limit the spread of COVID-19, and one aims at helping people regulate their negative emotions during the crisis.[63] The project aims to produce results that could help all countries face the pandemic with means adapted to their populations.

Challenges and future directions


Confronting the vertical and horizontal models


Although a horizontal model of conducting science seems promising to overcome some limits of the vertical model (see also #Limits of the traditional vertical model and #Ambitions of the horizontal model), it is difficult to empirically assess its benefits.[4] It remains unclear whether two sets of research that study the same question—either through a vertical or horizontal way of doing science—would lead to different outcomes or not.[4]

Financial independence


Currently, to sustain a collaborative project, researchers either have to use money from their own grants or apply for funding.[3] Without financial independence, such projects are limited in the studies they can conduct.[3]

Coordinating hundreds of labs to conduct a study requires substantial administrative work.[3][4] Structures like the Psychological Science Accelerator (PSA) have to ensure that each participating lab obtains ethics approval to conduct a given study (see also Institutional review board). Within the PSA, this background work is currently carried out voluntarily by dedicated teams or researchers[64] alongside their main occupation.[3] By achieving financial independence (as is the case for CERN), these projects could optimize their functioning by creating positions dedicated to these tasks.

Authorship in the era of crowdsourced science


A researcher's career path in academia (i.e., job opportunities, grant attributions, etc.) depends on major contributions to research projects, which are often assessed through the number of publications on which one appears as lead author.[65] In psychology, it is common practice to list authors in order of contribution, with involvement decreasing down the list.[65] In practice, however, it is not rare for multiple authors on the same paper to have contributed equally to the project, but in different ways.[66] Contributors on a project therefore do not always get the credit they deserve, since author order does not capture each author's contribution well. This is especially true for large-scale collaborative projects with many contributors (e.g., the PSA001 project has over 200 contributors).[67]

An alternative to the current authorship system is the CRediT taxonomy, which describes 14 distinct categories (e.g., conceptualization of the project, administration of the project, funding acquisition, investigation) representing the roles typically played by contributors to a scientific project.[65][68] Papers relying on this taxonomy allow for a more representative description of each contributor's involvement in a project.
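
A contributor statement under this taxonomy is essentially structured metadata that can be validated mechanically. The sketch below lists the 14 CRediT role names (hyphenated here for simplicity; the official labels use dashes) with hypothetical author records:

```python
# The 14 roles of the CRediT (Contributor Roles Taxonomy).
CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis",
    "Funding acquisition", "Investigation", "Methodology",
    "Project administration", "Resources", "Software", "Supervision",
    "Validation", "Visualization", "Writing - original draft",
    "Writing - review & editing",
}

def credit_record(author, roles):
    # Validate the declared roles against the taxonomy before recording them.
    unknown = set(roles) - CREDIT_ROLES
    if unknown:
        raise ValueError(f"not CRediT roles: {sorted(unknown)}")
    return {"author": author, "roles": sorted(roles)}

# Hypothetical contributor statement for a large collaborative paper:
contributors = [
    credit_record("A. Lead", {"Conceptualization", "Project administration",
                              "Writing - original draft"}),
    credit_record("B. Analyst", {"Formal analysis", "Data curation"}),
    credit_record("C. Collector", {"Investigation"}),
]
```

Unlike a flat author list, such records let equal but different contributions (analysis vs. data collection) be credited explicitly.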

Developing open-science and crowdsourced practices


Enrolling students in collaborative projects could foster open-science and crowdsourced practices early in a researcher's career.[4] For instance, in the Collaborative Replication and Education Project (CREP),[51] students are taught the roots and importance of such practices through the replication of recent major findings in psychology.

Editorial policies of scientific journals also play a role in the adoption of open-science and crowdsourced practices, especially by defining new publication criteria.[4] For instance, more than 200 journals now offer an “in principle acceptance” format of peer review.[69] In this publishing format, articles are accepted for publication prior to data collection, on the basis of the provided theoretical framework, methodology, and analysis plan.[69]

Remaining issues


Analytic flexibility and other biases that collaborative projects are meant to address by aggregating experts are not always overcome.[70]

It has also been shown that crowdsourced projects involving poorly trained and minimally involved contributors (e.g., students) can lead to data falsification.[71] Linking up a wide array of contributors can thus introduce structural problems that may affect research outcomes.

Both issues highlight the importance of educational practices on open science and crowdsourced practices (see also #Developing open-science and crowdsourced practices).

Controversies


Controversies surrounding crowdsourced science do not directly involve criticism of crowdsourced science itself, but rather of its costs, in terms of both money and time.[72] Collaborative practices in research remain very expensive and face a large number of challenges.[72] Solutions to these challenges require major structural changes within research institutions and have significant repercussions on researchers' academic careers (see also #Challenges and future directions).

The shift from a vertical model toward a more horizontal one was partly motivated by the replication crisis in psychology. However, some authors are skeptical about the extent of this crisis.[73][74] According to these authors, the failure to replicate most findings is overestimated and mostly due to a lack of fidelity in replication protocols.[73] These claims cast doubt on whether large collaborative research projects are worth their cost, suggesting that a shift toward a horizontal model of doing science may not be necessary.

Given the cost of crowdsourced projects and the resources they require, crowdsourcing may not always be the optimal approach.[72] Nonetheless, the crowdsourced-science approach has helped develop tools from which any project, collaborative or not, can benefit. The optimal approach would be a compromise between the vertical and horizontal models, depending on the research question at hand and on the constraints of each project.[4][72]


References

  1. ^ Littmann, Mark; Suomela, Todd (June 2014). "Crowdsourcing, the great meteor storm of 1833, and the founding of meteor science". Endeavour. 38 (2): 130–138. doi:10.1016/j.endeavour.2014.03.002. PMID 24917173.
  2. ^ Anderson, Samantha F.; Kelley, Ken; Maxwell, Scott E. (13 September 2017). "Sample-Size Planning for More Accurate Statistical Power: A Method Adjusting Sample Effect Sizes for Publication Bias and Uncertainty". Psychological Science. 28 (11): 1547–1562. doi:10.1177/0956797617723724. PMID 28902575. S2CID 3147299.
  3. ^ a b c d e f g Moshontz, Hannah; et al. (1 October 2018). "The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network". Advances in Methods and Practices in Psychological Science. 1 (4): 501–515. doi:10.1177/2515245918797607. PMC 6934079. PMID 31886452.
  4. ^ a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad ae af Uhlmann, Eric Luis; Ebersole, Charles R.; Chartier, Christopher R.; Errington, Timothy M.; Kidwell, Mallory C.; Lai, Calvin K.; McCarthy, Randy J.; Riegelman, Amy; Silberzahn, Raphael; Nosek, Brian A. (July 2019). "Scientific Utopia III: Crowdsourcing Science". Perspectives on Psychological Science. 14 (5): 711–733. doi:10.1177/1745691619850561. PMID 31260639. S2CID 149202148.
  5. ^ Flore, Paulette C.; Wicherts, Jelte M. (February 2015). "Does stereotype threat influence performance of girls in stereotyped domains? A meta-analysis". Journal of School Psychology. 53 (1): 25–44. doi:10.1016/j.jsp.2014.10.002. PMID 25636259.
  6. ^ a b c Open Science Collaboration (August 2015). "Estimating the reproducibility of psychological science". Science. 349 (6251): aac4716. doi:10.1126/science.aac4716. hdl:10722/230596. ISSN 0036-8075. PMID 26315443. S2CID 218065162.
  7. ^ John, Leslie K.; Loewenstein, George; Prelec, Drazen (16 April 2012). "Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling". Psychological Science. 23 (5): 524–532. doi:10.1177/0956797611430953. PMID 22508865. S2CID 8400625.
  8. ^ Kerr, Norbert L. (August 1998). "HARKing: Hypothesizing After the Results are Known". Personality and Social Psychology Review. 2 (3): 196–217. doi:10.1207/s15327957pspr0203_4. PMID 15647155. S2CID 22724226.
  9. ^ Fanelli, Daniele (21 April 2010). "Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data". PLOS ONE. 5 (4): e10271. Bibcode:2010PLoSO...510271F. doi:10.1371/journal.pone.0010271. PMC 2858206. PMID 20422014.
  10. ^ Bakker, Marjan; Wicherts, Jelte M. (15 April 2011). "The (mis)reporting of statistical results in psychology journals". Behavior Research Methods. 43 (3): 666–678. doi:10.3758/s13428-011-0089-5. PMC 3174372. PMID 21494917.
  11. ^ Nuijten, Michèle B.; Hartgerink, Chris H. J.; van Assen, Marcel A. L. M.; Epskamp, Sacha; Wicherts, Jelte M. (23 October 2015). "The prevalence of statistical reporting errors in psychology (1985–2013)". Behavior Research Methods. 48 (4): 1205–1226. doi:10.3758/s13428-015-0664-2. PMC 5101263. PMID 26497820.
  12. ^ Bakker, Marjan; van Dijk, Annette; Wicherts, Jelte M. (7 November 2012). "The Rules of the Game Called Psychological Science". Perspectives on Psychological Science. 7 (6): 543–554. doi:10.1177/1745691612459060. PMID 26168111. S2CID 19712506.
  13. ^ Sears, David O. (1986). "College sophomores in the laboratory: Influences of a narrow data base on social psychology's view of human nature". Journal of Personality and Social Psychology. 51 (3): 515–530. doi:10.1037/0022-3514.51.3.515.
  14. ^ Arnett, Jeffrey J. (1 November 2008). "The Neglected 95% Why American Psychology Needs to Become Less American". The American Psychologist. 63 (7): 602–614. doi:10.1037/0003-066X.63.7.602. PMID 18855491. S2CID 21072349.
  15. ^ a b Henrich, Joseph; Heine, Steven J.; Norenzayan, Ara (15 June 2010). "The weirdest people in the world?". Behavioral and Brain Sciences. 33 (2–3): 61–83. doi:10.1017/S0140525X0999152X. PMID 20550733.
  16. ^ a b Puthillam, Arathy (15 April 2020). "Psychology's WEIRD Problem". Psychology Today.
  17. ^ a b c Klein, Richard A.; et al. (1 January 2014). "Investigating Variation in Replicability: A "Many Labs" Replication Project" (PDF). Social Psychology. 45 (3): 142–152. doi:10.1027/1864-9335/a000178. S2CID 19617004.
  18. ^ a b Klein, Richard A.; et al. (24 December 2018). "Many Labs 2: Investigating Variation in Replicability Across Samples and Settings". Advances in Methods and Practices in Psychological Science. 1 (4): 443–490. doi:10.1177/2515245918810225. hdl:1854/LU-8637133. S2CID 125236401.
  19. ^ a b Ebersole, Charles R.; et al. (November 2016). "Many Labs 3: Evaluating participant pool quality across the academic semester via replication". Journal of Experimental Social Psychology. 67: 68–82. doi:10.1016/j.jesp.2015.10.012. S2CID 3859122.
  20. ^ a b Klein, Richard A.; Cook, Corey L.; Ebersole, Charles R.; Vitiello, Christine; Nosek, Brian A.; Ahn, Paul; Brady, Abbie J.; Chartier, Christopher R.; Christopherson, Cody D.; Clay, Samuel (2017-01-12). "Many Labs 4: Replicating Mortality Salience with and without Original Author Involvement". PsyArXiv. doi:10.31234/osf.io/vef2c.
  21. ^ a b Ebersole, Charles R.; Nosek, Brian A.; Kidwell, Mallory C.; Buttrick, Nick; Baranski, Erica; Chartier, Christopher R.; Mathur, Maya; Campbell, Lorne; IJzerman, Hans; Lazarevic, Lili (11 December 2019). "Many Labs 5: Testing pre-data collection peer review as an intervention to increase replicability". PsyArXiv. doi:10.31234/osf.io/sxfm2. hdl:20.500.11820/edfe2964-a638-4341-9191-3016833f1d88. Retrieved 2020-04-25.
  22. ^ Howe, Jeff (1 June 2006). "The Rise of Crowdsourcing". Wired.
  23. ^ Wuchty, S.; Jones, B. F.; Uzzi, B. (18 May 2007). "The Increasing Dominance of Teams in Production of Knowledge". Science. 316 (5827): 1036–1039. Bibcode:2007Sci...316.1036W. doi:10.1126/science.1136099. PMID 17431139. S2CID 3208041.
  24. ^ Merton, R. K. (5 January 1968). "The Matthew Effect in Science: The reward and communication systems of science are considered". Science. 159 (3810): 56–63. doi:10.1126/science.159.3810.56. PMID 17737466. S2CID 3526819.
  25. ^ Petersen, Alexander M.; Jung, Woo-Sung; Yang, Jae-Suk; Stanley, H. Eugene (2011-01-04). "Quantitative and empirical demonstration of the Matthew effect in a study of career longevity". Proceedings of the National Academy of Sciences. 108 (1): 18–23. arXiv:0806.1224. Bibcode:2011PNAS..108...18P. doi:10.1073/pnas.1016733108. PMC 3017158. PMID 21173276.
  26. ^ Nosek, Brian A.; Spies, Jeffrey R.; Motyl, Matt (6 November 2012). "Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability". Perspectives on Psychological Science. 7 (6): 615–631. arXiv:1205.4251. doi:10.1177/1745691612459058. PMC 10540222. PMID 26168121. S2CID 23602412.
  27. ^ Ebersole, Charles R.; Axt, Jordan R.; Nosek, Brian A. (12 May 2016). "Scientists' Reputations Are Based on Getting It Right, Not Being Right". PLOS Biology. 14 (5): e1002460. doi:10.1371/journal.pbio.1002460. PMC 4865149. PMID 27171138.
  28. ^ a b Landy, Justin F.; et al. (May 2020). "Crowdsourcing hypothesis tests: Making transparent how design choices shape research results" (PDF). Psychological Bulletin. 146 (5): 451–479. doi:10.1037/bul0000220. hdl:1854/LU-8749868. PMID 31944796.
  29. ^ Ioannidis, John P. A. (30 August 2005). "Why Most Published Research Findings Are False". PLOS Medicine. 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMC 1182327. PMID 16060722.
  30. ^ Ball, Philip (26 February 2014). "Crowd-sourcing: Strength in numbers". Nature. 506 (7489): 422–423. Bibcode:2014Natur.506..422B. doi:10.1038/506422a. PMID 24572407. S2CID 4458060.
  31. ^ Polymath, D. H. J. (2012). "A new proof of the density Hales-Jewett theorem". Annals of Mathematics. 175 (3): 1283–1327. doi:10.4007/annals.2012.175.3.6. ISSN 0003-486X. JSTOR 23234638. S2CID 60078.
  32. ^ Castryck, Wouter; Fouvry, Étienne; Harcos, Gergely; Kowalski, Emmanuel; Michel, Philippe; Nelson, Paul; Paldi, Eytan; Pintz, János; Sutherland, Andrew; Tao, Terence; Xie, Xiao-Feng (28 December 2014). "New equidistribution estimates of Zhang type". Algebra & Number Theory. 8 (9): 2067–2199. arXiv:1402.0811. doi:10.2140/ant.2014.8.2067. ISSN 1944-7833. S2CID 119695637.
  33. ^ Tao, Terence; Croot, Ernest; Helfgott, Harald (2012). "Deterministic methods to find primes". Mathematics of Computation. 81 (278): 1233–1246. doi:10.1090/S0025-5718-2011-02542-1. ISSN 0025-5718. Retrieved 24 April 2020.
  34. ^ Visscher, Peter; Brown, Matthew; McCarthy, Mark; Yang, Jian (13 January 2012). "Five Years of GWAS Discovery". The American Journal of Human Genetics. 90 (1): 7–24. doi:10.1016/j.ajhg.2011.11.029. ISSN 0002-9297. PMC 3257326. PMID 22243964.
  35. ^ Van Berkel, Laura; Crandall, Christian S. (2018). Conceptual Replication of the Automaticity of Hierarchy: Multiple Methods to Test a Hypothesis - SAGE Research Methods. doi:10.4135/9781526438720. ISBN 9781526438720. Retrieved 24 April 2020.
  36. ^ Lynch, John G.; Bradlow, Eric T.; Huber, Joel C.; Lehmann, Donald R. (1 December 2015). "Reflections on the replication corner: In praise of conceptual replications". International Journal of Research in Marketing. 32 (4): 333–342. doi:10.1016/j.ijresmar.2015.09.006. ISSN 0167-8116. Retrieved 24 April 2020.
  37. ^ "Prolific | Online participant recruitment for surveys and market research". www.prolific.co. Retrieved 2020-04-30.
  38. ^ Buhrmester, Michael; Kwang, Tracy; Gosling, Samuel D. (2011-01-01). "Amazon's Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data?". Perspectives on Psychological Science. 6 (1): 3–5. doi:10.1177/1745691610393980. ISSN 1745-6916. PMID 26162106. S2CID 6331667.
  39. ^ Crone, Damien L.; Williams, Lisa A. (2017). "Crowdsourcing participants for psychological research in Australia: A test of Microworkers". Australian Journal of Psychology. 69 (1): 39–47. doi:10.1111/ajpy.12110. ISSN 1742-9536. S2CID 146389537.
  40. ^ Stewart, Neil; Chandler, Jesse; Paolacci, Gabriele (1 October 2017). "Crowdsourcing Samples in Cognitive Science". Trends in Cognitive Sciences. 21 (10): 736–748. doi:10.1016/j.tics.2017.06.007. hdl:1765/101603. ISSN 1364-6613. PMID 28803699. S2CID 4970624.
  41. ^ a b c Silberzahn, R.; et al. (23 August 2018). "Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results". Advances in Methods and Practices in Psychological Science. 1 (3): 337–356. doi:10.1177/2515245917747646. ISSN 2515-2459. S2CID 55306786.
  42. ^ Carp, Joshua (2012). "On the Plurality of (Methodological) Worlds: Estimating the Analytic Flexibility of fMRI Experiments". Frontiers in Neuroscience. 6: 149. doi:10.3389/fnins.2012.00149. ISSN 1662-453X. PMC 3468892. PMID 23087605.
  43. ^ Carp, Joshua (15 October 2012). "The secret lives of experiments: Methods reporting in the fMRI literature". NeuroImage. 63 (1): 289–300. doi:10.1016/j.neuroimage.2012.07.004. ISSN 1053-8119. PMID 22796459. S2CID 11070366. Retrieved 24 April 2020.
  44. ^ a b c d e f Moshontz, Hannah; Ebersole, Charles R.; Weston, Sara J.; Klein, Richard A. (14 August 2019). "A Guide for Many Authors: Writing Manuscripts in Large Collaborations". PsyArXiv. doi:10.31234/osf.io/92xhd. S2CID 233806155. Retrieved 25 April 2020.
  45. ^ Sakaluk, John; Williams, Alexander; Biernat, Monica (November 2014). "Analytic Review as a Solution to the Misreporting of Statistical Results in Psychological Science". Perspectives on Psychological Science. 9 (6): 652–660. doi:10.1177/1745691614549257. ISSN 1745-6916. PMID 26186115. S2CID 206778243.
  46. ^ Nosek, Brian A.; Bar-Anan, Yoav (2012). "Scientific Utopia: I. Opening Scientific Communication". Psychological Inquiry. 23 (3): 217–243. arXiv:1205.1055. doi:10.1080/1047840X.2012.692215. S2CID 6635829.
  47. ^ Buttliere, Brett T. (2014). "Using science and psychology to improve the dissemination and evaluation of scientific work". Frontiers in Computational Neuroscience. 8: 82. doi:10.3389/fncom.2014.00082. PMC 4137661. PMID 25191261.
  48. ^ Frank, Michael C.; et al. (2017). "A Collaborative Approach to Infant Research: Promoting Reproducibility, Best Practices, and Theory-Building". Infancy. 22 (4): 421–435. doi:10.1111/infa.12182. PMC 6879177. PMID 31772509.
  49. ^ The ManyBabies Consortium (2020). "Quantifying Sources of Variability in Infancy Research Using the Infant-Directed-Speech Preference" (PDF). Advances in Methods and Practices in Psychological Science. 3 (1): 24–52. doi:10.1177/2515245919900809. hdl:21.11116/0000-0005-E457-8. S2CID 204876716.
  50. ^ Pavlov, Yuri G.; et al. (2021-04-02). "#EEGManyLabs: Investigating the replicability of influential EEG experiments". Cortex. 144: 213–229. doi:10.1016/j.cortex.2021.03.013. hdl:1885/295623. ISSN 0010-9452. PMID 33965167. S2CID 232485235.
  51. ^ a b Grahe, Jon; et al. (2013). "Collaborative Replications and Education Project (CREP)". OSF. doi:10.17605/OSF.IO/WFC6U.
  52. ^ Wagge, Jordan; Baciu, Cristina; Banas, Kasia; Nadler, Joel; Schwarz, Sascha; Weisberg, Yanna; IJzerman, Hans; Legate, Nicole; Grahe, Jon (2019). "A Demonstration of the Collaborative Replication and Education Project: Replication Attempts of the Red-Romance Effect". Collabra: Psychology. 5 (1): 5. doi:10.1525/collabra.177.
  53. ^ Kupferschmidt, Kai (26 February 2020). "'A completely new culture of doing research.' Coronavirus outbreak changes how scientists communicate". sciencemag.org. Retrieved 24 April 2020.
  54. ^ "Crowdfight COVID-19". crowdfightcovid19.org. Retrieved 24 April 2020.
  55. ^ Kwon, Diana (19 March 2020). "Near Real-Time Studies Look for Behavioral Measures Vital to Stopping Coronavirus". scientificamerican.com. Retrieved 24 April 2020.
  56. ^ "Andreas Lieberoth 🤯 on Twitter: "Thanks to everyone who is helping us make #COVIDiSTRESS global survey one of the largest databases on Human experiences of the Coronavirus-situation. We don't, however, cover clinical level problems - but this study does! Please help them help others ⛑ LINK" / Twitter". Twitter. Retrieved 25 April 2020.
  57. ^ "Dr. Eliza Bliss-Moreau on Twitter: "The @BlissMoreauLab is seeking participants for a brief survey about people's emotional and social experiences during the #COVID19 #coronavirus pandemic. Please RT & share. Thank you! Link is here: LINK" / Twitter". Twitter. Retrieved 25 April 2020.
  58. ^ "Marine Rougier on Twitter: "We are conducting a study on the emotional and behavioral consequences of social distancing during the coronavirus pandemic. It takes 12 min (language options: English, French, German and Italian soon!). RT would be a great help 🙂 LINK" / Twitter". Twitter. Retrieved 25 April 2020.
  59. ^ "Jeff Larsen on Twitter: "'social distancing' isn't much fun. A team of psychologists are trying to find better terms and we need your help. If you have 15 minutes to take our survey, please go to: LINK. We need Americans from all walks of life to take part, so please share widely!" / Twitter". Twitter. Retrieved 25 April 2020.
  60. ^ "Michelle Lim on Twitter: "We are doing a study on the impact of Covid-19 on our relationships, health and wellbeing. Read more: LINK LINK" / Twitter". Twitter. Retrieved 25 April 2020.
  61. ^ "Rhonda Balzarini, PhD on Twitter: "Twitterverse: we need your help! @Relationscience, @zoppolat and I are launching a groundbreaking study on the effects of the #COVID19 pandemic on how people connect, relate and cope during this time. Can you help us spread the word? LINK Please RT widely!" / Twitter". Twitter. Retrieved 25 April 2020.
  62. ^ Crchartier (13 March 2020). "The PSA Calls for Rapid and Impactful Study Proposals on COVID-19". Psychological Science Accelerator. Retrieved 24 April 2020.
  63. ^ a b c Crchartier (21 March 2020). "Join the PSA's Rapid-Response COVID-19 Project". Psychological Science Accelerator. Retrieved 24 April 2020.
  64. ^ "People". Psychological Science Accelerator. 24 October 2017.
  65. ^ a b c Brand, Amy; Allen, Liz; Altman, Micah; Hlava, Marjorie; Scott, Jo (2015). "Beyond authorship: attribution, contribution, collaboration, and credit". Learned Publishing. 28 (2): 151–155. doi:10.1087/20150211. ISSN 1741-4857.
  66. ^ Rennie, Drummond; Yank, Veronica; Emanuel, Linda (20 August 1997). "When authorship fails. A proposal to make contributors accountable". JAMA: The Journal of the American Medical Association. 278 (7): 579–585. doi:10.1001/jama.1997.03550070071041. PMID 9268280.
  67. ^ Jones, Benedict C.; et al. (18 May 2018). "To Which World Regions Does the Valence-Dominance Model of Social Perception Apply?". PsyArXiv. doi:10.31234/osf.io/n26dy. hdl:11577/3365866. Retrieved 25 April 2020.
  68. ^ "CRediT - Contributor Roles Taxonomy". CASRAI. Retrieved 24 April 2020.
  69. ^ a b "Registered Reports". cos.io. Retrieved 24 April 2020.
  70. ^ Chatard, Armand; Hirschberger, Gilad; Pyszczynski, Tom (7 February 2020). "A Word of Caution about Many Labs 4: If You Fail to Follow Your Preregistered Plan, You May Fail to Find a Real Effect". PsyArXiv. doi:10.31234/osf.io/ejubn. S2CID 236806340. Retrieved 25 April 2020.
  71. ^ Brown, Nick (9 May 2019). "Nick Brown's blog: An update on our examination of the research of Dr. Nicolas Guéguen". Nick Brown's blog. Retrieved 24 April 2020.
  72. ^ a b c d Silberzahn, Raphael; Uhlmann, Eric L. (8 October 2015). "Crowdsourced research: Many hands make tight work". Nature News. 526 (7572): 189–91. Bibcode:2015Natur.526..189S. doi:10.1038/526189a. PMID 26450041. S2CID 4444922.
  73. ^ a b Gilbert, Daniel T.; King, Gary; Pettigrew, Stephen; Wilson, Timothy D. (4 March 2016). "Comment on "Estimating the reproducibility of psychological science"". Science. 351 (6277): 1037. Bibcode:2016Sci...351.1037G. doi:10.1126/science.aad7243. ISSN 0036-8075. PMID 26941311. S2CID 16687911.
  74. ^ Stroebe, Wolfgang (2019-03-04). "What Can We Learn from Many Labs Replications?". Basic and Applied Social Psychology. 41 (2): 91–103. doi:10.1080/01973533.2019.1577736. ISSN 0197-3533. S2CID 150753165.