User:Sumervaid/sandbox

From Wikipedia, the free encyclopedia

Final Project Wikipedia Contribution

Arguments against open science[edit]

Low-Quality Science

Post-publication peer review, a staple of open science, has been criticized as promoting the production of lower-quality papers in extremely high volumes.[1] Specifically, critics assert that because preprint servers do not guarantee quality, the veracity of papers will be difficult for individual readers to assess. This could lead to rippling effects of false science, akin to the recent epidemic of fake news propagated with ease on social media websites.[2] A commonly cited solution to this problem is a new format in which everything may be published, but a subsequent filter-curator model is imposed to ensure that all publications meet a basic standard of quality.[3]

Arguments for open science[edit]

Open Science Will Help Answer Uniquely Complex Questions

Recent arguments in favor of open science have maintained that it is a necessary tool for beginning to answer immensely complex questions, such as the neural basis of consciousness.[4] The typical argument holds that these types of investigations are too complex to be carried out by any one individual, and must therefore rely on a network of open scientists to be accomplished. By its very nature, this kind of investigation also turns such "open science" into "big science".[5]

Preprint Servers[edit]

Preprint servers come in many varieties, but their standard traits are stable: they seek to create a quick, free, open-access, open-source mode of communicating scientific knowledge to the public. Also typical of preprint servers is the absence of a peer-review process; most have some type of quality check in place to ensure a minimum standard of publication, but this mechanism is not the same as peer review. Some preprint servers have explicitly partnered with the broader open science movement.[6] Preprint servers typically imitate the resources offered by a full journal, making all posted articles available on Google Scholar and collecting data about citations. Especially notable is the ability to share articles with the public through social media without requiring potential readers to create new accounts with the preprint server.

The case for preprint servers is often made based on the slow pace of conventional publication formats. The motivation for starting SocArXiv, an open-access preprint server for social science research, was the claim that valuable research published in traditional venues often takes several months to years to appear, which slows down the process of science significantly. Another argument made in favor of preprint servers like SocArXiv is the quality and speed of feedback offered to scientists on their pre-published work. The founders of SocArXiv claim that their platform allows researchers to gain easy feedback from their colleagues on the platform, thereby allowing scientists to develop their work to the highest possible quality before formal publication and circulation. The founders further claim that their platform affords authors the greatest flexibility in updating and editing their work, ensuring that the latest version is available for rapid dissemination. They claim this is not traditionally the case with formal journals, which impose formal procedures for making updates to published articles. Perhaps the strongest advantage of some preprint servers is their seamless compatibility with open science software such as the Open Science Framework. The founders of SocArXiv claim that their preprint server connects all aspects of the research life cycle in OSF with the article being published on the preprint server. According to the founders, this allows for greater transparency and minimal work on the authors' part.[7]


Midterm Project Below This[edit]

Here is the contribution I have made to the Wikipedia page that is currently live. Feel free to peer-review it:

The term "open science" does not have any one fixed definition or operationalization. On the one hand, it has been referred to as a "puzzling phenomenon".[8] On the other hand, the term has been used to encapsulate a series of principles that aim to foster scientific growth and its complementary access to the public. Two influential sociologists, Benedikt Fecher and Sascha Friesike, have created multiple "schools of thought" that describe the different interpretations of the term.[9]

According to Fecher and Friesike, "Open Science" is an umbrella term for various assumptions about the development and dissemination of knowledge. To show the term's multitudinous perceptions, they differentiate between five Open Science schools of thought:

Infrastructure School[edit]

The infrastructure school is founded on the assumption that "efficient" research depends on the availability of tools and applications. The "goal" of the school is therefore to promote the creation of openly available platforms, tools, and services for scientists. Hence, the infrastructure school is concerned with the technical infrastructure that supports emerging and developing research practices through the use of the internet, including software and applications, in addition to conventional computing networks. In that sense, the infrastructure school regards open science as a technological challenge. The infrastructure school is tied closely to the notion of "cyberscience", which describes the trend of applying information and communication technologies to scientific research; this trend has in turn fostered the school's development. Specific elements of this development include increasing collaboration and interaction between scientists, as well as the emergence of "open-source science" practices. The sociologists discuss two central trends in the infrastructure school:

1. Distributed computing: This trend encapsulates practices that outsource complex, process-heavy scientific computing to a network of volunteer computers around the world. The example the sociologists cite in their paper is the Open Science Grid, which enables the development of large-scale projects requiring high-volume data management and processing, accomplished through a distributed computer network. Moreover, the grid provides tools that scientists can use to facilitate this process.[10]

2. Social and Collaboration Networks for Scientists: This trend encapsulates the development of software that makes interaction with other researchers and scientific collaboration much easier than traditional, non-digital practices. Specifically, the trend is focused on implementing newer Web 2.0 tools to facilitate research-related activities on the internet. De Roure and colleagues (2008)[11] list four key capabilities which they believe compose a Social Virtual Research Environment (SVRE):

  • The SVRE should primarily aid the management and sharing of research objects. The authors define these to be a variety of digital commodities that are used repeatedly by researchers.
  • Second, the SVRE should have inbuilt incentives for researchers to make their research objects available on the online platform.
  • Third, the SVRE should be "open" as well as "extensible", implying that different types of digital artifacts composing the SVRE can be easily integrated.
  • Fourth, the authors propose that the SVRE is more than a simple storage tool for research information. Instead, the researchers propose that the platform should be "actionable". That is, the platform should be built in such a way that research objects can be used in the conduct of research as opposed to simply being stored.

Measurement School[edit]

The measurement school, in the view of the authors, deals with developing alternative methods to determine scientific impact. This school acknowledges that measurements of scientific impact are crucial to a researcher's reputation, funding opportunities, and career development. Hence, the authors argue that any discourse about Open Science pivots on developing a robust measure of scientific impact in the digital age. The authors then discuss other research indicating support for the measurement school. The three key currents of previous literature discussed by the authors are:

  • Peer review is described as time-consuming.
  • The impact of an article, tied to the name of the authors of the article, is related more to the circulation of the journal rather than the overall quality of the article itself.
  • New publishing formats that are closely aligned with the philosophy of Open Science are rarely found in the format of a journal that allows for the assignment of the impact factor.

Hence, this school argues that there are faster impact-measurement technologies that can account for a range of publication types as well as social media coverage of a scientific contribution, arriving at a more complete evaluation of that contribution's impact. The gist of the argument for this school is that hidden uses like reading, bookmarking, sharing, discussing, and rating are traceable activities, and these traces can and should be used to develop a newer measure of scientific impact. The umbrella term for this new type of impact measurement is altmetrics, coined in a 2011 article by Priem et al.[12] Markedly, the authors discuss evidence that altmetrics differ from traditional webometrics, which are slow and unstructured. Altmetrics are proposed to rely upon a greater set of measures that account for tweets, blogs, discussions, and bookmarks. The authors claim that the existing literature has often proposed that altmetrics should also encapsulate the scientific process, measuring the process of research and collaboration to create an overall metric. However, the authors are explicit in their assessment that few papers offer methodological details as to how to accomplish this. The authors use this and the general dearth of evidence to conclude that research in the area of altmetrics is still in its infancy.

Public School[edit]

According to the authors, the central concern of this school is to make science accessible to a wider audience. The inherent assumption of this school, as described by the authors, is that newer communication technologies such as Web 2.0 allow scientists to open up the research process and also allow scientists to better prepare their "products of research" for interested non-experts. Hence, the school is characterized by two broad streams: one argues for access to the research process for the masses, whereas the other argues for increased public access to the scientific product.

  • Accessibility to the Research Process: Communication technology allows not only for the constant documentation of research but also promotes the inclusion of many different external individuals in the process itself. The authors cite a recurrent term in this current of research: citizen science, described as the participation of non-scientists and amateurs in research. The authors discuss instances in which gaming tools allow scientists to harness the brain power of a volunteer workforce to run through several permutations of protein-folded structures. This allows scientists to eliminate many more plausible protein structures while also "enriching" citizens' understanding of science. The authors also discuss a common criticism of this approach: the amateur nature of the participants threatens to undermine the scientific rigor of experimentation.
  • Comprehensibility of the Research Result: This stream of research concerns itself with making research understandable for a wider audience. The authors describe a host of scholars who promote the use of specific tools for scientific communication, such as microblogging services, to direct users to relevant literature. The authors claim that this school proposes that it is the obligation of every researcher to make their research accessible to the public. The authors then proceed to discuss whether there is an emerging market for brokers and mediators of knowledge that is otherwise too complicated for the public to grasp effortlessly.

Democratic School[edit]

The democratic school concerns itself with the concept of access to knowledge. As opposed to focusing on the accessibility of research and its understandability, advocates of this school focus on public access to the products of research. The central concern of the school is with the legal and other obstacles that hinder public access to research publications and scientific data. The authors argue that proponents of this school assert that any research product should be freely available. The authors argue that the underlying notion of this school is that everyone has the same, equal right of access to knowledge, especially in the instances of state-funded experiments and data. The authors categorize two central currents that characterize this school: Open Access and Open Data.

  • Open Data: The authors discuss existing attitudes in the field that push back against the notion that publishing journals should claim copyright over experimental data, which prevents the re-use of data and therefore lowers the overall efficiency of science in general. The claim is that journals have no use for the experimental data and that allowing other researchers to utilize this data will be fruitful. The authors cite other literature streams that discovered that only a quarter of researchers agree to share their data with other researchers, because of the effort required for compliance.
  • Open Access to Research Publication: According to this school, there is a gap between the creation and sharing of knowledge. Proponents argue, as the authors describe, that even though scientific knowledge doubles every five years, access to this knowledge remains limited. These proponents consider access to knowledge a necessity for human development, especially in the economic sense.

Here is my contribution about Popular Science Writing in the History of Open Science:

Popular Science Writing[edit]

The first popular science periodical of its kind was published in 1872 under a name that remains a modern portal for science journalism: Popular Science. The magazine claims to have documented the invention of the telephone, the phonograph, and the electric light, as well as the onset of automobile technology. The magazine goes so far as to claim that the "history of Popular Science is a true reflection of humankind's progress over the past 129+ years".[13] Discussions of popular science writing most often center their arguments on some type of "science boom". A recent historiographic account of popular science traces mentions of the term "science boom" to Daniel Greenberg's Science and Government Reports, which posited in 1979 that "Scientific magazines are bursting out all over". Similarly, this account discusses the publication Time and its 1980 cover story on Carl Sagan as propagating the claim that popular science had "turned into enthusiasm".[14] Crucially, this secondary account asks the important question of what was considered popular "science" to begin with. The paper claims that any account of how popular science writing bridged the gap between the informed masses and expert scientists must first consider who was considered a scientist to begin with.

  1. ^ http://ronininstitute.org/open-science-and-its-discontents/1383/
  2. ^ http://www.npr.org/tags/502124007/fake-news
  3. ^ https://thewinnower.com/
  4. ^ https://www.youtube.com/watch?v=OXFB-SDCSX8
  5. ^ https://www.braininitiative.nih.gov/
  6. ^ https://socopen.org/2016/07/09/announcing-the-development-of-socarxiv-an-open-social-science-archive/
  7. ^ https://socopen.org/2016/07/09/announcing-the-development-of-socarxiv-an-open-social-science-archive/
  8. ^ David, P. A. (2008). The historical origins of ‘Open Science’: An essay on patronage, reputation and common agency contracting in the scientific revolution. Capitalism and Society, 3(2), 5.
  9. ^ Fecher, Benedikt; Friesike, Sascha (2014). "Open Science: One Term, Five Schools of Thought". Opening Science. doi:10.1007/978-3-319-00026-8_2.
  10. ^ Altunay, M., et al. (2010). A science-driven production Cyberinfrastructure—the Open Science grid. Journal of Grid Computing, 9(2), 201–218. doi:10.1007/s10723-010-9176-6.
  11. ^ De Roure, D., et al. (2008). myExperiment: defining the social virtual research environment. In IEEE (pp. 182–189). Available at: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4736756.
  12. ^ Priem, J., et al. (2011). Uncovering impacts: CitedIn and total-impact, two new tools for gathering altmetrics. In iConference 2012 (pp. 9–11). Available at: http://jasonpriem.org/selfarchived/two-altmetrics-tools.pdf
  13. ^ http://www.popsci.com/scitech/article/2002-07/history-popular-science
  14. ^ Lewenstein, Bruce V. "Was there really a popular science “boom”?." Science, Technology, & Human Values 12.2 (1987): 29-41.