Wikipedia talk:WikiProject Articles for creation/March 2014 Backlog Elimination Drive

From Wikipedia, the free encyclopedia
This page is used for the administration of the Articles for Creation or Files for Upload processes and is therefore within the scope of WikiProject Articles for Creation. Please direct any queries to the discussion page. This page does not require a rating on Wikipedia's content assessment scale.

Invitations sent[edit]

Invitations to the March 2014 Backlog Elimination Drive have been sent to project participants, using the following invitation list: Wikipedia:WikiProject Articles for creation/March 2014 Backlog Elimination Drive/Invitation list. NorthAmerica1000 02:19, 28 February 2014 (UTC)[reply]

The invitation also includes an announcement of a new helper script version. This must be a copy-paste error; there is no new helper script version. ~KvnG 15:44, 28 February 2014 (UTC)[reply]

Sub Page[edit]

The sub page to record reviews doesn't appear to be working for me. Makro (talk) 18:13, 2 March 2014 (UTC)[reply]

It's happening to me as well, so maybe Excirial would know something about this. Kevin Rutherford (talk) 06:13, 3 March 2014 (UTC)[reply]
It's not instant, the bot comes along every few days and updates the page. --LukeSurl t c 18:50, 3 March 2014 (UTC)[reply]

Confusion[edit]

I see that two submissions I declined earlier are marked Pass by User:Josve05a (Click here): Demian Gregory (1st one) and Adele Dazim (2nd one). But they have not actually been passed. By {{pass}}, I understood that a decline of mine was found inappropriate and the article was moved to mainspace by some other reviewer. But these two submissions in my subpage are marked pass and have not been moved to mainspace. Am I still missing something? Anupmehra -Let's talk! 10:58, 4 March 2014 (UTC)[reply]

@Anupmehra: What you currently see is another editor re-reviewing your reviews. Reviews done by other editors are double-checked to make sure they were done correctly, both to find problems with a user's reviews and to prevent people from reviewing too fast in order to achieve a higher score. The "Pass" here means that the review was checked for quality, and no problems were found with it. Excirial (Contact me,Contribs) 15:35, 4 March 2014 (UTC)[reply]
Ah! thanks. Anupmehra -Let's talk! 12:25, 4 March 2014 (UTC)[reply]

Do you have to check for copyvio everytime you review something?[edit]

I got three fails because something I declined was a copyvio: Wikipedia:WikiProject_Articles_for_creation/March_2014_Backlog_Elimination_Drive/Sintaku. I didn't know that it was mandatory to check for copyvio when reviewing. I normally check for copyvio before I approve or accept an AfC, but rarely if I decline. ~~ Sintaku Talk 13:45, 7 March 2014 (UTC)[reply]

Yes, it's important to check for copyright violations if the text looks at all like a finished product (no need if it just says "I have the cutest cat, and he has pretty stripes, too."). There are at least four reasons for this: (1) copyright violations have to be removed right away for legal reasons, anywhere on Wikipedia, not just in articles; (2) the submitter may continue to put a lot of work into finding references, formatting, finding images, etc., all the time not realizing that his or her work will eventually just be deleted, which would be very discouraging; (3) if new users aren't informed about copyright issues, they will just continue creating more copyright problems; (4) if the article is declined for other reasons, the editor may stop working on it, leaving the copyvio lying around for months. —Anne Delong (talk) 14:42, 7 March 2014 (UTC)[reply]
PS- Everybody gets caught by this sometimes. It's too easy to be reading an interesting submission and forget a step, or choose for testing the only section in the article that the submitter has written themselves. —Anne Delong (talk) 15:20, 7 March 2014 (UTC)[reply]
Hi Sintaku, please take a good long look at the reviewing instructions; you might find the process flowchart particularly useful as a guide to how to do a review. Roger (Dodger67) (talk) 15:44, 7 March 2014 (UTC)[reply]
How much help can we count on getting from bots checking for copyright violations? I know they're out there and I know they're checking this namespace. ~KvnG 17:25, 7 March 2014 (UTC)[reply]

Duplicate content[edit]

  • Question - How do you figure out whether a submission you are not familiar with already exists under a different title in the article mainspace? Anupmehra -Let's talk! 17:32, 7 March 2014 (UTC)[reply]
Under reviewer tools in the templates on the submissions, there's a link to search Wikipedia for the title. The link is called WP; it's next to the Google and Bing search links. It searches for the exact title but also many other useful variations, section headings, redirects, and article content. ~KvnG 17:40, 7 March 2014 (UTC)[reply]
Consider these two pages, one an AfC draft and the other an already existing article:
(1) AfC MYRRHA (2) Lead-cooled fast reactor
I'm not sure the WP search link would catch this case. Anupmehra -Let's talk! 18:02, 7 March 2014 (UTC)[reply]
Reading the lead gets you there, though. If we miss a duplicate or merge opportunity in review and create a second article on a notable topic, that is a mistake that can be corrected with a merge proposal later on. We're encouraged to accept articles which have as little as a 50% chance of surviving WP:AFD. I personally apply the same relatively permissive attitude to other potential justifications for declining. ~KvnG 21:38, 7 March 2014 (UTC)[reply]

Rejection priority[edit]

Workflow flowchart

As mentioned in the thread above, there is a specified priority order for choosing a reason for rejection. I've come across several cases where a previous reviewer has given an NPOV or formatting reason for rejection without mentioning that there is also a notability issue. The authors then spend time improving an article on a non-notable subject. When the hammer finally comes down, they get (rightfully) angry and accuse us of moving the goalposts. Follow the flowchart to avoid these problems. If you reject a submission for a lesser infraction than notability, leave a comment indicating that you believe the topic is notable. Such comments have been helpful to me. Communication like this will keep us from contradicting one another. ~KvnG 17:36, 7 March 2014 (UTC)[reply]

Where does unambiguous advertising or promotion fall on the chart? Is it a lesser infraction than notability? I reject articles on the basis that they read like advertisements before I reject them based on notability. (I'm not referring to a lack of neutrality, which we can help editors address, but straight ahead ads or entries that exist only to promote the subject.) Does notability come before advertising as a reason to decline? (Apologies if the answer is obvious -- I love the flow chart -- I'm just unclear on this.) Thanks! Julie JSFarman (talk) 01:56, 8 March 2014 (UTC)[reply]
The difference is that a promotional tone in an article can be fixed, but a non-notable subject can't be overcome. If an article is very promotional and you decline it as such, but it is also about an obscure subject which isn't notable in itself (for example, a musician who has never played on stage or recorded a track), it's important to leave a comment referring the editor to the appropriate notability page. Usually it's better to decline for lack of notability, and then ask for the tone change in the message. —Anne Delong (talk) 05:05, 8 March 2014 (UTC)[reply]
"Unambiguous advertising or promotion" would fall under the "Neutral point of view?" in the flow chart. It's near the end. Definitely after notability. A thorough reviewer would list all reasons for declining (and suggestions for overcoming these) in comments but the first decline reason you encounter while going through the flowchart is what you should select in the dropdown when declining. ~KvnG 21:58, 8 March 2014 (UTC)[reply]
Got it. Notability from the drop down, and then details/guidance on additional issues. Thanks. JSFarman (talk) 04:23, 9 March 2014 (UTC)[reply]
Hold on just a moment. Unfixable blatant spam is Speedy Deletion criterion G11, so it should be included in the "Quick Fail" area, we need to fix this. NPOV is about biased language that can be fixed. Roger (Dodger67) (talk) 19:19, 13 March 2014 (UTC)[reply]
Who can edit the flowchart to fix this problem? Roger (Dodger67) (talk) 08:42, 25 March 2014 (UTC)[reply]

Question[edit]

If a submission is declined by some reviewer for a reason that appears low in the priority table, should the review be marked "poorly done" ({{AFCDriveQC|F}})? Anupmehra -Let's talk! 10:21, 12 March 2014 (UTC)[reply]

I would "fail" the review only if the reviewer missed a critical problem such as copyvio, blatant advertising, vandalism, attack - the "quick fail" criteria or problems that would result in speedy deletion. Roger (Dodger67) (talk) 09:37, 13 March 2014 (UTC)[reply]
Seems a bit harsh and bureaucratic. I would suggest adding a new comment to the review and {{ping}} the reviewer so the reviewer and author know there are other issues with the submission. ~KvnG 15:13, 13 March 2014 (UTC)[reply]
Passing/Failing reviews is only relevant to the reviewing "scores" of backlog drive participants, it does not involve adding or removing anything on the draft concerned. The draft author doesn't even know it's happening. If a reviewer can't handle a "harsh" fail of one of their reviews they have no business doing reviews - "if you can't take the heat...". It's one of the reasons why inexperienced newbies should not be doing reviews. Roger (Dodger67) (talk) 19:07, 13 March 2014 (UTC)[reply]
In such a case, I would "fail" the review if: 1) there was an obviously more important failure reason, 2) the listed failure reason was obviously patently false, or 3) the article could easily have been fixed and accepted AND it "should" have been fixed and accepted (that is, if the submission was likely to wind up at AFD and barely survive to start with, and the failure reason was "fixable" but the potential AFD-issues were not, I'm not going to fail the reviewer). When I say "obviously" more important I'm not going to assume the reviewer has the chart in front of them. Rather, I'm going to focus on blatant policy issues that any competent reviewer knows are show-stoppers, such as an obvious blatant copyright violation. davidwr/(talk)/(contribs) 02:12, 15 March 2014 (UTC)[reply]

Reviewers[edit]

I have been re-reviewing and have not been added to the list of reviewers. How long does it normally take to update the list? Makro (talk) 15:33, 17 March 2014 (UTC)[reply]

  • Hello Makro - I have noticed it updates once every 48-72 hours. There is a thread on this page, in the Sub Page section, that reads: "It's not instant, the bot comes along every few days and updates the page". Anupmehra -Let's talk! 16:05, 17 March 2014 (UTC)[reply]

Spread the re-review efforts around, but focus them where needed[edit]

As you re-review, consider focusing on reviewers who have had relatively few re-reviews, and consider spacing your efforts out.

It's far better to re-review 10 reviews each from 10 different reviewers than to re-review 50 reviews each from 2 reviewers. You get to give your comments to 10 different people and in the process see 10 different people's "reviewing style." The 2 reviewers who would have seen 50 comments by you will benefit from others re-reviewing more of their articles, and the 8 reviewers who would have missed out on your take on their reviews can benefit from it.

I would make an exception for newcomers to the AFC process: If a reviewer is new and has only reviewed 1-2 dozen submissions, ideally his reviews should be re-reviewed quickly. The same goes for reviewers with a relatively high "fail" rate.

It's also a good idea to re-review anything that has received 1 "failing" re-review but which has not had a second re-review yet. davidwr/(talk)/(contribs) 21:10, 27 March 2014 (UTC)[reply]

Suggested editor reminder for future drives[edit]

There have been problems where a very prolific reviewer's sheer speed has called the quality of his reviews or re-reviews into question.

For this reason, I recommend that the scoring software flag editors who have a large number of rapid-fire reviews or re-reviews, particularly if the editors have not participated in previous backlog drives. Flagged editors should have a polite note added to their talk page reminding them that quality is much more important than quantity. davidwr/(talk)/(contribs) 21:51, 27 March 2014 (UTC)[reply]

Recommended scoring change: penalize rapid reviewers[edit]

To discourage rapid-fire reviewing and re-reviewing, deduct 1/2 point for every review or re-review over 30 in any rolling 60 minute period in future drives. Or, conversely, give double points if the person has less than 30 reviews or re-reviews in the last 60 minutes. davidwr/(talk)/(contribs) 22:10, 27 March 2014 (UTC)[reply]
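As a rough illustration of the rolling-window deduction proposed above, here is a minimal sketch. The function name, the minute-based timestamp representation, and the parameter defaults are hypothetical, not part of AFCBuddy or any existing drive tooling:

```python
from bisect import bisect_left

def rolling_penalty(times, window=60, limit=30, deduction=0.5):
    """Deduct `deduction` points for each review beyond `limit`
    within any rolling `window`-minute period.

    `times` is a sorted list of review timestamps, in minutes."""
    penalty = 0.0
    for i, t in enumerate(times):
        # Index of the earliest review inside the window ending at t.
        start = bisect_left(times, t - window)
        in_window = i - start + 1  # reviews in the window, this one included
        if in_window > limit:
            penalty += deduction
    return penalty
```

Under this reading of the rule, a review is penalized when more than 30 reviews (itself included) fall within the 60 minutes ending at its own timestamp; 40 reviews done one per minute would incur 10 half-point deductions.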

But what if one were to stock up a lot of re-reviews before saving the page? Or if one were to find a "really bad" user with many terrible reviews? (tJosve05a (c) 22:13, 27 March 2014 (UTC)[reply]
With respect to re-reviews, a rolling per-6-hour rather than per-1-hour limit would make this a practical non-issue. On the other hand, a limit of 180 wouldn't do anything to deter a rapid-fire reviewer or re-reviewer if they weren't going to hit that limit anyways due to only spending a short time doing AFC work in any given 6-hour period. davidwr/(talk)/(contribs) 22:48, 27 March 2014 (UTC)[reply]
Arbitrary limits or bright-line rules tend to be a fairly bad starting point in most cases. Ten extra lines of code in AFCBuddy's code base would allow it to place 29 random reviews every 60 minutes. That would be silly, but it would keep me within the specified limitations even though my reviews were garbage. On the other hand, I could paste 31 re-reviews into a notepad over a timespan of two hours and then place them in a batch once done. In that case it would look as if I did 31 reviews in an hour and violated the rule, even though I did decent work.
If anything, I would say quis custodiet ipsos custodes? If every reviewer were to check a handful of reviews AND a handful of re-reviews, we would effectively check everyone's work with relatively little effort. If a user's reviews are bad, a re-reviewer should spot it. If a re-reviewer does a shoddy job, a second user reviewing the re-review would spot it. I am aware that this would take some cooperation across the entire drive and would absorb some extra time, but I do believe it would allow us to spot problems before they get out of hand. Let me know if this doesn't make sense. (Sleepy head is sleepy) Excirial (Contact me,Contribs) 00:12, 28 March 2014 (UTC)[reply]
I share Josve05a's concerns - such an arbitrary limit could easily be hit by legitimate reviewing. Sam Walton (talk) 22:26, 27 March 2014 (UTC)[reply]
Part of this problem may be my fault, since I am the one who first suggested that re-reviewers get points as well as reviewers. We may have overdone it; maybe points for re-reviews should be limited to a maximum number, or only given half-a-point, since they often take less time than the original reviews. —Anne Delong (talk) 22:32, 28 March 2014 (UTC)[reply]
It might not be a bad idea to separate them into a separate category so that people cannot rapidly increase their totals during the last few days of the drive. Kevin Rutherford (talk) 02:54, 29 March 2014 (UTC)[reply]
Granted, hard limits are difficult, but how about a limit per user? No more points than for ten reviews of each participant's work? Or no more than 2% of the total submissions available for review? You can still review as many as you want, but the total points are capped. The Ukulele Dude - Aggie80 (talk) 11:35, 7 April 2014 (UTC)[reply]

Diligent Participation award[edit]

A suggested award for the next drive: A Diligent Participation Award for those who participate in AFC on at least 15 days of the calendar month of the drive (UTC time) and a special version of the ribbon for those able to participate 25 days out of the month.

Participation is defined as doing anything project-related, such as:

  • Things a score-keeping bot can count, such as:
      • Editing any draft (other than one that that editor created)
      • Editing any AFC project or discussion page, including the Help Desk (minus edits which mention a page the editor created, or which are in a section titled the same)
      • Editing templates, scripts, and other pages in AFC-project categories or otherwise known to the score-keeping bot (minus edits which mention a page the editor created, or which are in a section titled the same)
  • Things obviously related to AFC that the bot can't count but which are self-declared and readily verifiable, such as:
      • Working on code that is stored off-wiki
      • Working with an editor on his talk page regarding a submission
      • Anything else obviously AFC-related and easily verifiable
  • After-the-fact adjustments when the score-keeping bot's rules flag something as "not to be counted" that really should be. These adjustments will likely only be requested by editors who are close to the 15- or 25-day threshold, or who spend most days doing only project-related tasks that the bot can't count.
Hard work should be rewarded, but working too hard can make you insane.

This award is designed to acknowledge those diligent editors who, even if they can't review several hundred submissions during the month, do try to put in a few minutes each day into the project. It is also designed to be an award that a newcomer to AFC can easily get.

Why 25 and not the whole month? I don't want to discourage people from taking one day a week off to keep from going insane, and I want to honor those whose religion requires a "day of rest" or something similar. Also, I don't want to encourage people who really are too busy on a given day to "make a minor edit" just so they don't "miss a day." davidwr/(talk)/(contribs) 17:57, 28 March 2014 (UTC)[reply]
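The day-counting rule behind the proposed award could be sketched as follows. The function name and input shape are hypothetical, and the score-keeping bot's filtering of which edits actually count is not modeled here:

```python
from datetime import datetime

def qualifying_days(edit_timestamps, threshold=15):
    """Count the distinct UTC calendar days on which an editor made
    at least one counted project-related edit, and report whether
    the count meets the award threshold.

    `edit_timestamps` is a list of UTC datetimes, one per edit."""
    days = {ts.date() for ts in edit_timestamps}
    return len(days), len(days) >= threshold
```

Multiple edits on the same UTC day collapse to a single qualifying day, so four edits spread over three days would yield a count of 3; the special ribbon would use `threshold=25`.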

I like it. In my case it would be pretty easy to get, just the way I work, which is typically checking in every morning and hitting a couple of reviews. And I typically check in a couple of times during the day. But I think any way to recognize what others are doing is a good thing. It is too easy to actively do things and never get any sort of recognition or acknowledgement that one's efforts have even been noticed. The Ukulele Dude - Aggie80 (talk) 11:43, 7 April 2014 (UTC)[reply]
This is a good idea, but I don't think that an editor should get more than one award during the drive. If a reviewer has reviewed a large number of submissions, he or she should get one of the existing awards. Also, is there a reason why this award couldn't be given out manually during months when there is no drive if a reviewer notices a new regular participant and wants to be encouraging? —Anne Delong (talk) 12:51, 7 April 2014 (UTC)[reply]

Conclusion of drive[edit]

Hello all. This drive has been a rewarding experience for me, and I hope the same for everyone else who participated. Forgive me if I am being impatient, but the drive has been over for nearly 10 days, and we have seen no editing activity from the designated co-ordinator since then. I have to wonder whether we should seek another editor to bring the drive to a conclusion and issue the awards. Any thoughts on this matter? Thanks — MusikAnimal talk 00:09, 10 April 2014 (UTC)[reply]

I agree. We should probably get someone else to conclude the drive and award the stars. ~~ Sintaku Talk 07:06, 10 April 2014 (UTC)[reply]
I'm sorry to hear @Fremantle99: hasn't concluded this. It was their idea. I don't have the mass-message right normally used for the awards although I could hit the user talk pages the old-fashioned way. Also, I'm not clear on if there's consensus to invalidate both Makro's and Belshay's contributions to this drive. Perhaps @Northamerica1000, Hasteur, and Technical 13: could advise. Chris Troutman (talk) 08:05, 10 April 2014 (UTC)[reply]
In all fairness... when the drive itself was started, the AFC Drive tab had a very shiny "New Drive!" link that would start a new drive if you clicked it and saved the resulting page. That link could have used some form of Molly guard to prevent accidents, though it was fairly amusing to monitor how long it took for someone to unintentionally create a new drive (which was bound to happen). Unfortunately the link did not log clicks - I am still curious how many people were drawn to click the link without saving the resulting page. Excirial (Contact me,Contribs) 22:07, 10 April 2014 (UTC)[reply]
I know I clicked the link a few times before someone else created it...(tJosve05a (c) 10:34, 12 April 2014 (UTC)[reply]

Awarding process has begun[edit]

I have begun the awarding process, and will update here accordingly as it progresses (it takes time). NorthAmerica1000 09:58, 12 April 2014 (UTC)[reply]

Awarding completed[edit]

Awarding has been completed. NorthAmerica1000 11:10, 12 April 2014 (UTC)[reply]

So the discussion on reviews and their applicability, as well as on double reviews, had no effect on the awards? It had appeared that there was consensus on what was going to be done. I am very disappointed in that. The Ukulele Dude - Aggie80 (talk) 13:39, 12 April 2014 (UTC)[reply]
I was not aware of the discussion at Wikipedia talk:WikiProject Articles for creation until I received the notification via ping at that discussion. I sent out the awards based upon the leaderboard and statistics at March 2014 Backlog Elimination Drive that are generated by User:Excirial/AFCBuddy. People in the thread above were concerned about when they would be sent out, so I sent them. I have watchlisted the talk pages concerning this matter, in hopes that a prompt resolution can be devised regarding any problems. NorthAmerica1000 14:47, 12 April 2014 (UTC)[reply]
N.b. Per the discussion at the project's main talk page, some of these awards that were sent may be modified if a consensus develops there to do so. NorthAmerica1000 18:13, 12 April 2014 (UTC)[reply]
(edit conflict) I specifically brought up the issue of negating Makro's and Belshay's awards in the sentence just prior to me pinging you. I appreciate you handing out the awards as you have the mass-message rights, but it looks silly when you hand out awards to users that have been criticized by multiple editors for their edits. Chris Troutman (talk) 18:19, 12 April 2014 (UTC)[reply]
As I understand it we have a practically unanimous consensus to disqualify Makro, but Belshay's status is not yet clear. The actual clerk of this drive has abandoned ship for some unknown reason, hence the discontinuity. Roger (Dodger67) (talk) 21:30, 12 April 2014 (UTC)[reply]
As I said above, I was not aware of the discussion at the project's main page until I was pinged there, which occurred after the awards had been sent. Nobody there or here thought to post a notice on this page about that discussion prior to the awards being sent, and I'm not psychic. I will respond further there, so there aren't two discussions occurring about the same matter. I'm sure we can all work together to resolve this matter. NorthAmerica1000 23:50, 12 April 2014 (UTC)[reply]

Awarding modified and now completed[edit]

The awards sent have been modified per consensus at !Vote requested to clarify matters about awards sent. The March 2014 Backlog Elimination Drive leaderboard has been modified accordingly per the award rescinding and updating that occurred. NorthAmerica1000 06:09, 15 April 2014 (UTC)[reply]

Not quite complete - the "new" 9th and 10th positions have not been awarded yet. Roger (Dodger67) (talk) 08:00, 15 April 2014 (UTC)[reply]
Actually, all people who qualified for an award per the overall totals in the Running Total section have been awarded on their talk pages. However, for clarity and to provide credit, I have updated the leaderboard up to ten participants. NorthAmerica1000 08:30, 15 April 2014 (UTC)[reply]
Excellent! Roger (Dodger67) (talk) 09:15, 15 April 2014 (UTC)[reply]