Wikipedia talk:Wikipedia Signpost/2015-02-18/Special report


Discuss this story

Please exercise extreme caution to avoid encoding racism or other biases into an AI scheme. For example, some editors have a major bias against having articles on every village in Pakistan, even though we have articles on every village in the U.S. Any trace of the local writing style, like saying "beautiful village", or naming prominent local families, becomes an object of ridicule for these people. Others can object to that, however. But AI (especially neural networks, though really any little-studied code) offers the last bastion of privacy: a place for making decisions where nobody has to answer for how the decision was reached. My feeling is that editors should keep a healthy skepticism - this was a project meant to be written, and reviewed, by people. Wnt (talk) 12:58, 20 February 2015 (UTC)[reply]

  • I agree. Many of these articles (e.g. articles on villages in Pakistan) are started in good faith by new editors trying to add information about where they live, usually an area underrepresented here. Do we really want to discourage contributions from these parts of the world? What harm is being done, considering (for example) the amount of allowable cruft added by fan-based or ideologically driven editors on topics primarily of interest to US editors? EChastain (talk) 14:08, 21 February 2015 (UTC)[reply]
    • Hi Wnt and EChastain. I agree. In fact, it's concern about this sort of potentially damaging behavior that led me to start this project in the first place. A substantial portion of my scholarly work has been studying the effect that quality-control algorithms have on the experience of being a new editor (see my pubs and WP:Snuggle). My hope is that, by making AI easy in this way, we'll be able to develop *better* ways to perform quality control work with AI -- e.g. we could develop a user gadget that Wikipedia:WikiProject_Pakistan members could use to review recent newcomers who work on project-related articles. One way you can help us out is by helping us build a dataset of damaging/not-damaging edits that does not flag good but uncommon edits as damaging (see the first sketch after this thread). Let us know on the talk page if you're interested in helping out. :) --Halfak (WMF) (talk) 17:59, 7 March 2015 (UTC)[reply]
    • Hello, Wnt and EChastain. Version control on Wikipedia serves a purpose well beyond simply maintaining a standard of quality for content; it also helps us detect new users and guide them toward becoming better editors. The purpose of our policies on neutrality, verifiability, notability, etc. isn't intuitive to most new editors at first. One of the goals of this project is to have a system that, among other things, can distinguish good-faith edits that inadvertently end up being damaging from malicious bad-faith edits that are intended to be damaging in the first place. With that distinction in place, human editors would have more time to focus on guiding new good-faith editors rather than wasting time reverting obviously malicious edits. For instance, with such a distinction we would have the option of simply letting humans process good-faith edits that are inadvertently damaging (see the second sketch after this thread). That is a community decision, however. We are merely facilitating such community discussions and decisions by reducing the overall workload (letting AI eliminate a portion of the problem and leaving the more complicated aspects to humans), since a heavy workload tends to push people toward measures that may end up biting newer editors who mean well but are not fully versed in policy. By having a centralized AI system that shares resources across various AI tools, we make systemic bias (and really any other accumulated problem) that may creep into AI algorithms - as unlikely as that may be - far more noticeable. -- A Certain White Cat chi? 07:46, 14 March 2015 (UTC)
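
To make the idea of a labeled damaging/not-damaging dataset concrete, here is a minimal sketch of the kind of thing such labels might feed into. It is purely illustrative and is not the project's code: the features, the example edits, and the use of scikit-learn are all assumptions invented for this illustration.

```python
# Purely illustrative sketch -- not the revision-scoring project's actual code.
# Assumes a small hand-labeled set of edits, each reduced to a few toy features.
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled edits: (chars_added, links_added, is_anonymous) -> damaging?
labeled_edits = [
    ((350, 2, 0), False),    # sourced expansion by a registered editor
    ((40, 0, 1), False),     # small good-faith fix by an IP editor
    ((-1200, 0, 1), True),   # large unexplained removal
    ((25, 5, 1), True),      # link spam
]

X = [features for features, _ in labeled_edits]
y = [label for _, label in labeled_edits]

model = LogisticRegression().fit(X, y)

# Score a new edit: the output is a probability, not a verdict. The point of
# building the dataset carefully is to keep "uncommon but good" edits (e.g.
# village articles written in a local style) out of the damaging class.
new_edit = [(60, 1, 1)]
print(model.predict_proba(new_edit)[0][1])  # estimated P(damaging)
```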
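
And to make the good-faith/damaging distinction concrete, here is a rough sketch of the triage logic described above. Again, this is illustrative only: the two scores, the thresholds, and the queue names are assumptions, not anything the project has specified.

```python
# Illustrative triage sketch, assuming two independent model scores per edit:
# P(damaging) and P(good faith). Thresholds and queue names are invented here.

def triage(p_damaging: float, p_good_faith: float) -> str:
    """Route an edit based on two hypothetical model scores."""
    if p_damaging < 0.2:
        return "no action"                       # leave clearly fine edits alone
    if p_good_faith >= 0.5:
        return "mentoring / human review queue"  # damaging but well-intentioned: guide, don't bite
    return "counter-vandalism queue"             # likely bad faith: revert quickly

# Example: an edit the damage model dislikes but the good-faith model trusts.
print(triage(p_damaging=0.7, p_good_faith=0.8))  # -> mentoring / human review queue
```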