
Wikipedia talk:Version 1.0 Editorial Team/Article selection/Archive1



Initial discussion

Can I suggest a combination of the proposals? I like the idea of the wikilink importance test, and can see it being used well, though I think it would be much better to do the more complicated version, taking the importance of the linking pages into account (obviously, this may demand some significant resources, meaning that I'll be more comfortable running it on my development box - basically, it may occasionally miss a week, but a poke at me on IRC should yield the required result :)). So, perhaps we could do option A first (would a trial of the same four WikiProjects be OK?) and get the overall score of each article from a first iteration. We then take the average of all of these scores (over all surveyed articles) and call it X. On the next iteration, rather than using the importance tags, we can use the links in and the previous scores of those linking pages. Linking articles which don't have a score from last time temporarily get the average overall score for the purposes of determining the importance of the article they link to (for example, if article B is under WP:MH, and article A links to it but isn't under any WikiProject, then for the purposes of determining the importance of B, A is given a pseudo-score of the average). Hope that makes sense - would it work for what we're looking for?

There would be a few things I'd need to know to do tests:

  1. What project importance scores should the Chemistry, Medicine, Physics and Mathematics WPs get?
    Duh! 0.9 I guess. Martinp23
  2. What thresholds/formulae should there be to obtain importance from links in? An article could get a score like 5000 - is this to be banded into top, high, med..., or divided by some factor?
  3. What should the threshold for inclusion in the table be? Would it be better to take, say, the top x%, or shall we have a fixed score again?

Just to clarify, I'd imagine that the importance from the second (and later) iterations would be multiplied by the quality (as rated) and then by the project importance rating - is this right? (There's a rough sketch of the calculation just below.) If I've said stuff that doesn't make sense here, please ask for clarification! Martinp23 16:15, 5 January 2007 (UTC)
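
In code, the scheme sketched above might look something like this - an illustrative Python sketch, assuming a simple table of scores from the previous run; the function names are hypothetical, not the bot's actual source:

  # First iteration: Option A's straight multiplication.
  def option_a_score(quality, importance, project_importance):
      return quality * importance * project_importance

  # Later iterations: importance comes from the pages linking in, using
  # their scores from the previous run; linking pages with no previous
  # score contribute the global average X as a temporary pseudo-score.
  def link_based_importance(links_in, previous_scores, average_x):
      if not links_in:
          return average_x
      total = sum(previous_scores.get(page, average_x) for page in links_in)
      return total / len(links_in)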

I've done a run just now, so we've got all the scores in the database. It'll be interesting to see what becomes of the Physics project, as it has a lot of articles with a score of 1.8 (maybe 1000+) which may get onto the table with the links-in criterion. How do we want to play this - just multiply quality by links in and then by the importance factor, and take all of those over, say, 80% of the average overall score? I'll be able to do this tomorrow if there's a system for it by then (and fix that typo at the bottom of the tables). Thanks, Martinp23 01:47, 6 January 2007 (UTC)

Thanks a lot for jumping in and testing this! Yes, for Option A you multiply (quality × importance × project importance). The idea was that anything over 20 would go into Version 0.7. Can you answer a few questions for me?
  • I presume that you're testing out option A at the moment, is that correct?
  • When you say "(Physics has) a lot of articles with a score of 1.8," do you mean stubs (2*0.9=1.8)?
  • Are the data at User:MartinBotII/WP1.0/Physics? If so, it might be nice to get the bot to print the article scores into the table, certainly during the testing phase. This would become very helpful if we start using the "links in" ranking, because we may have two articles with very similar scores before that step, but with very different link-rankings. As things are at present, we may have 1000 with identical scores - but with link-rankings every article will have its own score.
Yes, I'd like to blend the two (WikiProject data and link-rankings) together, and when I designed option B I had that in mind. We will have to be very careful to weight them correctly when combining, though, otherwise one piece of data may predominate. For option B, I came up with a formula which I don't have on me, but if you want to test it out let me know. Also, if we get good results from link-rankings, we may be able to get rid of WikiProject rankings altogether; that would save us a lot of grief from "unimportant" projects. I agree that the more complex system is MUCH better, and would be worth the trouble if you have the time and knowledge. It may require a separate bot (MartinBotIII?), just to generate the link-ranking.
Playing around with AWB, I noticed that countries score VERY high on linkranking, and even something like English Channel scores quite high. Also, generic terms like acid will tend to rank higher than even quite important topics that are more specific (like sulfuric acid). In some cases the article (such as "solubility" in chemistry) may be far less useful than one on sulfuric acid - but because many chemical pages have a chembox that includes the solubility of that chemical, the solubility article will rank very high. Yet most people will know what is meant without needing to click on the link. Another thing I found is that en:Wikipedia tends to be US- and UK-centric, not surprisingly. That means that Dr. Who or baseball gets far better coverage than some much weightier topics that should be better covered, such as Martinique - whose article is little more than a stub. There are cities with several million people living in them, even capitals, that are similar. There are also weak areas such as business - the world's largest scientific supplier, with sales of around $6 billion, is Thermo Fisher Scientific, which has no article at all. These weaknesses will be reflected in rankings, and we may need to try and compensate for that.
I think your proposal for generating the more complex link-ranking (using an average X to fill in gaps) sounds fine. By the time of the next run, both A and B would have numbers, and so the link-ranking will quickly move towards the true figure.
I think we all agree that WikiProject ranking will be tricky, and I think (if used) every Project will need its own individual ranking. For example, even in one area, how do you rank projects on The Beatles against Miles Davis? The Rolling Stones vs. The KLF? Dallas, Texas vs. Brussels? Arsenal FC vs the New England Patriots? If those are hard, how about comparing the KLF with Arsenal? Miles Davis vs Brussels? You may be able to say A is more important than B, but by how much? One lesson I learnt from Version 0.5 was how different perceptions of importance can be - I (as a Brit who was 17 in 1977) regard Punk Rock as one of the major music revolutions of the last century, as important as Rock and Roll, yet another reviewer (whom I respect highly) rejected the Punk Featured Article as "too narrow a subject". (We later changed criteria, so it may be in now). At the same time, other quite obscure (IMHO) topics were included. Personally I don't even like some of the rankings given under option A. The lesson - it'd be best not even to try ranking projects if we can avoid it, for it'll be a hornet's nest. For testing, I think we could put the three sciences as the same rank of 0.9, and put Maths at 1.2 or so, what do you think?
I wonder if for testing, particularly if we can use the link-rankings, we might generate several sets of results and compare them. I'd like to compare option A and B, if it's not too much trouble to test both algorithms, and see:
  • How the article lists compare. If there are significant differences, we might ask the relevant projects to give their preferences.
  • The time taken for the bot to complete its task, and the server usage for each. I'm concerned that as we make the algorithm more sophisticated, and the numbers get more decimal places, this will become a more serious problem with option A, especially since multiplication uses far more CPU time than addition (if I recall my computer science correctly!). I figure that working with simple integers and additions will reduce CPU usage by a huge amount - that was the main purpose of designing option B. It may turn out not to matter, but I'd like to see.
To close, some specific answers to your other questions. I think I'd like to stick with a threshold for now, rather than a %, but it would be great to compare the two methods if that can be done. As for the actual formula for merging the link-ranking, I'll devise something for option A if you want. I'll post the formula for Option B soon.
Thanks again! Walkerma 05:06, 6 January 2007 (UTC)
OK - before I take each of the sections of your long reply in turn, I'll just say that I'll be testing option B later, when I write the code for it - option A was fairly simple, of course: all I had to do was write the database bits and add in a multiplication. So, yep, I was testing option A, using the same sort of formula you have there. About the Physics articles - I have the view of the database open right now, and it seems that there are 2955 articles with a score of 1.8, which, looking at one or two, appear not to have an importance rating - which gives us a real-life scenario for the link testing (as there are probably some fairly important topics there which are only stubs). Also, I'll put the score in the "comments" section for the future iterations.
It will be nice to be able to drop the WikiProject rankings - both for the inherent ambiguity in some topics and for the protests that we're going to get about it - it would be nice to have an objective measure of WikiProject importance, though I doubt one exists. About the option B formula - I may be able to do B next weekend (A can be sooner, with most of the code being there). I'm happy to do the more complex system with the link rankings, and we can hope that we get results without too much bias (or put some compensator in there). Should the bot go through all of the WikiProjects in the 0.7 release listings on each run, or would it be doing only certain projects each time (or certain project categories)? I'll measure how long it takes to update four WikiProjects (4752 articles - lots are members of the Physics WP) and we may be able to use this to estimate how long it will take for other projects. Now, I should emphasize, it's taking the program less than a second to do all of the multiplications for each project (this is a bit of an "I reckon that..." thing - I've not timed it). Most of the time is spent fetching the pages, which will of course be greatly increased when we're loading a "what links here" page for every article, so, to counter the possibility of it taking more than an hour and messing up the other task which it's running as MartinBotII anyway, I'll probably put it under another account. This can be MartinBotIV (MartinBotIII is already taken :)), or a name of your choice (with "Bot" in it!).
WikiProject Physics, with its lack of importance ratings, will provide us with some interesting data as to how many iterations we'll actually need, as the average repeatedly increases - hopefully we'll get usable results after 3 or 4 runs.
I agree about the difficulty of project rankings - for the test I used 0.9 for them all (based on my interpretation of maths as a science), but I'll adjust it for the next run to have 1.2 for maths (as they seem to have fewer articles which reach the threshold).
I'll be happy to test both, though I suspect option B may take longer for me to do, as things get pretty hectic after Tuesday. I'll put something indicative in the edit summary for the bot, and the comparison can be made in the page history - how does that sound? I'll time the two systems on both the initial and the updating runs so that we can see what the difference is (this obviously won't be very precise - it will probably vary depending on how many IRC channels I'm on, and what else I'm doing at the same time...). At least, a huge difference will be very easy to spot. I'll also try increasing the number of queries the SQL server can take, which should cut a few seconds off (if I can find a way to do it). Thanks, Martinp23 12:37, 6 January 2007 (UTC)
Interestingly, it seems that this big chunk of Physics WikiProject articles was all automatically tagged as stubs, ??? importance, because they had {{physics-stub}} on them (Talk:Sum-over-paths). This is quite interesting, and may indicate that many WikiProjects have far more articles (tagged as stubs) which could be in their projects. I expect that someone with an AWB bot account would be able to add all of the pages to the relevant WikiProject, if asked (I could probably do it too, with MartinBotIII - though it's not approved for the task, I doubt it would need approval). What do you think? Martinp23 12:41, 6 January 2007 (UTC)
Just a quick comment as I get ready for the Version 0.5 IRC (in about 45 minutes) - I think User:Kingboyk from the Beatles project wrote a script/bot to allow people to auto-tag all articles from a category like Physics-Stub. When the Biography project did their stubs like that, it added about 50,000 articles to our listing overnight! Walkerma 19:23, 6 January 2007 (UTC)
Wow - that's a lot! It's a really simple task in AWB, so he may have been using that. In any case, I suspect that if you want to, I can get a group of AWB bots (Kingboyk's, mine, Betacommand's, Alphachimp's and Mets501's spring straight to mind) to go through and get all of them sorted in a fairly short time. Martinp23 19:53, 6 January 2007 (UTC)

Wikiprojects rating each other

I think we should have WikiProjects rate each other, and only intervene for disputes. It would be much faster once we notify all the WikiProjects. The WikiProjects will use the criteria in Option A, and some other method (including none) if the WikiProject being rated doesn't fit into any of the categories. In the case of disputes involving categories (whether a project is in one or not), 5 random WikiProjects will be selected to vote on it. If the dispute is about a WikiProject that does not fit into any category, we will deal with it. Is this a good idea? Eyu100(t|fr|Version 1.0 Editorial Team) 04:18, 7 January 2007 (UTC)

The bot would get the ratings from a central page. Eyu100(t|fr|Version 1.0 Editorial Team) 04:30, 7 January 2007 (UTC)
This is certainly an interesting idea - but I'm not sure it is workable. I would fear problems, because it would not be good for inter-project relations if X gave Y a much lower rank than Y thinks it deserves. If people write 100kB of discussion to defend one article's importance, what about a whole project! Also, to work well it requires almost every project to participate fully in the process - and I don't think we'd get more than 10-20% of them to participate. Projects come and go, or at least wax and wane, and work in a variety of ways. If there is a feasible way (a) to make it trivially easy to do via some kind of automation, and (b) to introduce anonymity into it, then it might provide some useful info, but I can't see how that could be done. IMHO any scheme to do with projects should also be discussed thoroughly at WP:COUNCIL - there are a lot of smart people over there. Are there practicable ways around the problems? What do others think?
In the meantime, I think we need to wait and see which system we end up using, because it is at least possible that we can dispense with WikiProject ranking altogether. Alternatively, we incorporate an alternative system such as I mention below in the next section. Ironically, over the long haul this system (option Y, at least) may ultimately achieve the same thing as you are advocating - ratings by consensus of other WikiProjects - but in a different way!
I have to admit, none of the alternatives right now look great, unless my proposal below is workable, so I'd love to be convinced that this could work! Walkerma 05:18, 7 January 2007 (UTC)
Also, given the number of WikiProjects - over 1000, I think - it would be really difficult to determine which projects should rate which other projects. Should Wikipedia:WikiProject Biography rate Wikipedia:WikiProject Crowded House, or should Wikipedia:WikiProject Music do so instead? Similarly, should Wikipedia:WikiProject Neopaganism rate Wikipedia:WikiProject British Columbia? With all due respect, do they even necessarily know anything about the subject? These problems - the interrelationship (if any) of the projects and their comparative scope - are, I think, the biggest stumbling blocks. John Carter (talk) 15:01, 6 February 2008 (UTC)

Assessing Project importance from article importance

Eyu100 raised the 300 lb gorilla of a problem we face, namely assessing project importance in a way that projects will accept as relatively impartial. For impartial, read automated - most people will be much more accepting of a standard automated system (though they may grumble) than of anything involving individuals. I was very upset over the summer to see Wikipedians hurling mud at valued reviewers over differing views of importance; I'd rather we avoid that in the future. The automated alternatives are laid out here as option X and option Y; only the former is feasible right now, but in the longer term both might be used as part of the algorithm if we think it's best. Both options assume that we can implement the more sophisticated version of "linkranking" and similar algorithms.

Option X

Every subject-based WikiProject I'm aware of has one article that you could call its principal article. For WikiProject Chemistry, it's Chemistry; for the Beatles WikiProject it's The Beatles. To assess a project's importance, we simply rank it based upon the importance of that principal article using linkranking. Not perfect, but I think it will be pretty fair in nearly all cases, and any (few, hopefully) exceptions could be dealt with by debate (and perhaps some of the ideas Eyu100 outlined just above). Also, it's simple enough that people will understand that it's pretty fair, and it's not inherently biased against any specific project.
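
A minimal sketch of Option X in Python, assuming a hypothetical linkrank mapping from article titles to link-based scores (the principal-article table here is illustrative):

  PRINCIPAL_ARTICLE = {
      "WikiProject Chemistry": "Chemistry",
      "WikiProject The Beatles": "The Beatles",
  }

  def project_importance(project, linkrank):
      # The project's importance is just the linkranking of its
      # designated principal article.
      return linkrank[PRINCIPAL_ARTICLE[project]]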

I like this idea better than mine. It would be just as fast as that, if not faster, and there would be fewer disputes. There would be no disputes about bias because the process is automated. Eyu100(t|fr|Version 1.0 Editorial Team) 18:11, 7 January 2007 (UTC)

Option Y

This is for the longer term, and it depends on another idea that is just crystallising for me right now. I've been writing sets of "trees" as navigation pages for Version 0.5, and I've always seen these as potentially growing into an entire hierarchy of articles - a tree of trees, or supertree if you like. (I even have ideas for some nifty formatting, something that had eluded me for a long time.) To see some of my older efforts, see User:Walkerma/Test and click down through the hierarchy using any blue downarrows (such as SUBSTANCES -> INORG COMPOUNDS BY LAST GROUP -> OXIDES -> Fe2O3). Note that there are several ways to reach the same article, even via different branches (e.g., go via INORGANIC CHEMISTRY). Dream with me for a moment - imagine that in the future much of Wikipedia has hierarchical trees like this. In that case, we rank a WikiProject based on the position of its principal article in the various branches of the supertree. We come up with a formula for judging how important it is based on its positions in the supertree. Again - relatively impartial, and pretty fair. We could use a blend of both option X and Y if we think that is fairer.

A quick outline of how I see such a supertree growing.

DEMAND: I think once people start to play with the navigation pages on Version 0.5 they will enjoy browsing through them. I suspect people will want us to continue this in Version 0.7. Thus we should start the tree growing anyway, for the 1.0 project, unless it's impossible to scale.

SCALING UP: Writing the geographical and history pages for Version 0.5 has depended on (a) me spending about 40 hours so far writing them (not wasted - I uncovered lots of loose ends to tie up) and (b) me knowing practically every article out of the 1964. If we go to 100,000 articles for 1.0, I'll either have to get paid handsomely by WMF to quit my job, or we'll have to find a way to scale up. Once we have the format pinned down (which we can perfect over the coming weeks) I think this can probably be done yet again by a bot, trawling through article categories and turning those category connections into useful navigation pages. As with Mathbot, it'll be a learning process, but I think we'll be able to get a supertree that provides both a hierarchy and a very nice index system for our releases.

MATURITY: Once (if?!) we can get a supertree generated by bot, we should encourage WikiProjects to "prune" the tree as they see fit. There may be areas where the bot-generated tree is unsuitable, or where a human being can see a much better way of organising things. Ideally we should have some mechanism that lets a WikiProject organise things in its own subject area (without the bot reverting it the following month!). It may be impossible to code that, but if it can be done it should be done.

I will mull over this idea more and post a description of it on the 1.0 talk page soon. In the meantime please give your feedback. Sorry to waffle on so long! Walkerma 06:05, 7 January 2007 (UTC)

I feel that, despite the fact that link rankings (option X) would in this case be fairly objective, it would be all too easy for a WikiProject to do a text search for their top article title in Google (limited to Wikipedia) and wikilink each mention (maybe I'm being far-fetched - I don't know what the politics is like!). Also, thinking about this, it could be that a very small project (low rating on the scale) caters for a very important topic (which has lots of links in), in which case option X would be fine; but I feel that there will be some very important projects, like Entomology, which could have fewer links to their "standard bearer" article (presumably Entomology) than to the obvious Insect. Perhaps, if we do go for option X, the highest link count among the articles in a project could count as the importance for the project (or better, the sum of all the links to all the articles - see the sketch below). This way we remove all (inevitable) human bias from the system, except for the issue of mass linking, which may (hopefully) be a non-issue.
For option Y - I like the idea of such a system, though it could be a little complicated to implement (and subjective, as projects determine themselves to be more important than others - I mean, is Maths a science, or a separate subject? I say science, but many will disagree with me). I think that it would be great to have a box on all the talk pages of WP 0.7+ articles, showing their position in the hierarchy (with something (a score?) in the template call for the bot to read).
Any thoughts on my suggestions? I'm occasionally wondering (a bit) how much the WikiProject importance matters, if an article has a high number of links (which might indicate that it is important to the community regardless). I can see how the issue might be more important without the links-in importance calculations, to counter biases, though the best way to find out how well everything goes is probably by trials (though not too many!). Martinp23 18:20, 7 January 2007 (UTC)
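
A sketch of the two variants suggested just above - the highest links-in count among a project's articles, or the sum over all of them. The links_in_count mapping is hypothetical:

  def project_importance_max(articles, links_in_count):
      # Project importance = the best-linked article in the project.
      return max(links_in_count[a] for a in articles)

  def project_importance_sum(articles, links_in_count):
      # Or: the total links in to all of the project's articles.
      return sum(links_in_count[a] for a in articles)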

January 2007 trials

Results at User:MartinBotII/WP1.0.

Trial running now (slowly), banding the links-in "score" (the average importance of the links in) into four bands, roughly the same as the other system's values (anything with an average below 20 gets a score of its links-in rating /10). I'm using a multiplier of 0.9 for all projects again, and the scores themselves are being printed to the bot tables. Due to a slight hiccup by me, I'm not running a trial on the Chemistry project right now, but will rectify the situation when I can. For WP:Maths, we've got a lot more articles, so the threshold of 20 may need to be raised. I'm just waiting for the others to finish now... (we're up to Physics - the last and longest one). Martinp23 21:23, 8 January 2007 (UTC)
Exactly what was hoped for has happened on Medicine (diff). You can see which pages didn't originally have an importance rating by the lack of such an indicator in the table. Martinp23 21:30, 8 January 2007 (UTC)
The results seem to be fine, but a few (Start-Class, Low-Importance) should not be included. Can you explain your algorithm (does the link rating override the human rating, are ratings automatically generated)? Eyu100(t|fr|Version 1.0 Editorial Team) 04:06, 9 January 2007 (UTC)
This is great, thanks for all your help. I'm getting very excited about the way things are going - Version 0.5 is coming together nicely, while at the same time we are making progress in automating article selection. I am now quite hopeful that we can get some nice lists after a bit of work.
You're quite right - we should choose the principal article carefully - as you suggest, use Insect in place of Entomology. I think we have to trust that the Projects don't go and play silly games to boost their ranking - maybe we could look at histories, or maybe even just use the importance rank as it is now! Personally, I like the idea of blending the importance from links-in (machine-generated) with the WikiProject importance rank (human-generated). Maybe in time we can even compare one with the other somehow - to see how well they agree - and review manually any articles that are flagged as disparate in importance. One concern I have is with relying on one single method - even with the correction for the importance of each link-in - whereas blending in the human rank would give a very different approach that might provide something of a reality check (as with Cataract on the Medicine list). However, if we can get good data just from the linkranking, maybe that's good enough for now - or we could flag any rejected articles that were ranked Top/High by any project.
The output looks excellent - it's great to see the calculated number. Can you tell us what weight is given to the linkranking in the algorithm? That weighting is critical, of course! IMHO, the importance should outrank the quality. As was pointed out, we may have to adjust the threshold - the number 20 was designed (I understand) based on the "Option A" weightings on the project page. A Start article like Xenotransplantation with about 100 inlinks probably shouldn't make it, while Cataract (188 and a "Top") should, IMHO. I'm very busy with Version 0.5 this week, but I'm hopeful that I will have a lot more time in a week or so to work on details. Thanks again for all your work, Walkerma 05:17, 9 January 2007 (UTC)
I adapted the original trial like this:
  • Quality:
    1. FA-Class is 7.5
    2. A-Class is 6.5
    3. GA-Class is 5
    4. B-Class is 4.5
    5. Start-Class is 4
    6. Stub-Class is 2
  • Importance (from links) - articles that are needed for completeness have their importance rating doubled:
    1. Top-importance is 7.5, if the average importance of the links in is above 75 (would be v. rare at this stage - even impossible?)
    2. High-importance is 6, if the average importance of the links in is above 60 but below 75 (rare)
    3. Mid-importance is 4, if the average importance of the links in is above 40 but below 60
    4. Low-importance is 2.5, if the average importance of the links in is above 20 but below 40
    5. No-importance: if the average importance of the links in is below 20, the importance score is taken as the average/10 (this is just to go into the database - anything with this little importance shouldn't get into the results)
The quality score and importance score were then multiplied together, and results over 20 went into the table of results. Because I didn't want to mess around at the time, the importance indicator in the table was the one given on the article talk page, although this wasn't used in the calculations - hence some strange-looking results with scores that didn't add up (i.e. most of them!). As a note, due to the time it took to fetch the links data for each of the 4700-odd pages, I may download the links tables and run off them (weekly, or whenever they update them) - this way it will be a lot quicker. To see the time it took, take a look at the timestamps of MartinBotIII's edits... Thanks, Martinp23 20:06, 9 January 2007 (UTC)
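
Putting the numbers above together, the trial formula can be sketched like this (illustrative Python; the band boundaries and point values are exactly those listed above, while the function and variable names are hypothetical):

  QUALITY_POINTS = {"FA": 7.5, "A": 6.5, "GA": 5, "B": 4.5, "Start": 4, "Stub": 2}

  def importance_points(avg):
      # Bands from the list above; below 20, the raw average/10 is stored,
      # which keeps such articles under the inclusion threshold.
      if avg > 75:
          return 7.5   # Top-importance (v. rare at this stage)
      if avg > 60:
          return 6     # High-importance (rare)
      if avg > 40:
          return 4     # Mid-importance
      if avg > 20:
          return 2.5   # Low-importance
      return avg / 10  # "No importance"

  def trial_score(quality_class, avg, needed_for_completeness=False):
      imp = importance_points(avg)
      if needed_for_completeness:
          imp *= 2     # the completeness doubling mentioned above
      return QUALITY_POINTS[quality_class] * imp

  # An article went into the table of results if trial_score(...) > 20.
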
Thanks, that makes sense now. If we opt to use this formula, we may want to have it more finely tuned, since we will not be limited to bands of importance (e.g., at simplest we could use LINKS_IN/10). The only concern I have with the scale right now is that "No importance" really means "Hasn't been assessed for importance by the project". The lowest level of importance we use is normally "Low". I really like being able to see the WikiProject's own importance rating - please keep that if at all possible.
Using the links-in data, do you have any idea how many articles there are for each importance ranking? There seem to be some things that don't make sense. For example, on Math we have Heawood graph classed by the project as Low-importance/B-Class. It has about 10 mainspace links in, so it should be classed as "no importance" using the scale above, and the bot should calculate the number as 4.5 x 1.0 = 4.5 - yet the bot table says 33.75 (i.e., the bot is treating it as Top-importance). Fair division is a similar example (though that has quite a few non-mainspace links-in, which should be ignored IMHO). As for downloading the links data, I had no idea that was possible - that could be a great help. Thanks again, Walkerma 20:49, 9 January 2007 (UTC)
Hmm - well, the "No importance" band is just from the links rating, and was only used by me there :), though when there is no importance rating in the table, it means that the WikiProject hasn't ranked it (tell me if this doesn't make sense!). WRT the Heawood graph (et al.) problems, remember that the bot doesn't just take the number of links, but the average importance of those links, as taken from my database - though 33.75 seems a little far-fetched. When I re-test the same system, I'm going to get all of the original data again and run it through again, to find any anomalies. Hopefully the links tables will be available - I need to take a look and work out how to implement them, but it will be a lot better to run the bot off them if we use links. Martinp23 21:02, 9 January 2007 (UTC)
Sorry - my posts above were confusing. The bands of importance I mentioned were only for my own convenience, and were determined solely on link data (human ratings had no active part in this trial). So, in theory, something could have been Top-importance from the WikiProject, but rated in my "no importance" band on link rankings. Perhaps the best way to get the human rating for individual pages factored in (if we want it) would be to take the WikiProject score of the article into account in the links in (so if the article was Top-importance, we'd put 75 into the average with the links data - see the sketch below). The reason that I decided to use bands rather than simply dividing by 10 was to avoid getting the really long decimal numbers from the average calculations into the database or (more importantly) the table. For those under 2, it really doesn't matter, as they should never appear in the table of results (7.5 * 2 = 15, less than 20). To go back to the importance - because I didn't have the time, the importance box in the table didn't come from the links, although the score did. Sorry about this confusion - I hope this makes sense - please tell me if not. Martinp23 11:14, 12 January 2007 (UTC)
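
One way to code the blending idea just mentioned - only the Top = 75 figure comes from the post above; the other values in the table are illustrative guesses:

  WIKIPROJECT_VALUE = {"Top": 75, "High": 60, "Mid": 40, "Low": 20}

  def blended_average(link_scores, project_rating=None):
      # Fold the WikiProject's own rating into the links-in average by
      # treating it as one extra data point.
      values = list(link_scores)
      if project_rating in WIKIPROJECT_VALUE:
          values.append(WIKIPROJECT_VALUE[project_rating])
      return sum(values) / len(values) if values else 0.0
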
OK, I think I follow, except for one thing:
"the bot doesn't just take the number of links, but the average importance of those links."
How does the bot know what the importance of the links are, if it hasn't yet measured them? I thought we were going to do a first pass to get links-in uncorrected for importance, then use those data on the second run for judging importance. Or did you just do both passes - one uncorrected and one corrected - with the bot? By the way, can you tell us the formula you are using for doing the importance adjustment?
I also find it hard to believe that something ranked as "Low" by the project, and with only about 10 links-in, should come out in the top ranking band. I suspect there's a bug somewhere - can you get the bot to display (a) the number of links in and (b) the average importance of those links? I think that would really help us see what's going on, and it would be helpful even if there aren't bugs, at least in the development phase. I think until we get the link rankings working we should avoid trying to integrate the human rankings, as these will only complicate things unnecessarily. Thanks again, Walkerma 16:58, 12 January 2007 (UTC)
Most of them were taken from the averages, hence the inaccuracy of this test (which was only intended to show what was going on with links), and the others from the WikiProject importance in the database. I only did one pass, again due to the nature of the test - for the importance, there's no adjustment (as yet), though I hope to introduce one.
(Re the 2nd para - written before I noticed the first :)). Yes - as do I, and things don't seem to tally in the database. I'm going to get it working off the link tables tomorrow or Sunday, and will properly test it. Would it be possible to have some ideas for how to factor WikiProject importance into the system? I recall that there was some problem with the system at some point, though hopefully this can be resolved in the next test. Martinp23 23:41, 12 January 2007 (UTC)
These lists get more articles, which is good, but the WikiProject ranking should override the link ranking (the WikiProject knows more about the subject). Also, there are some articles that are rated way too high. Eyu100(t|fr|Version 1.0 Editorial Team) 23:21, 12 January 2007 (UTC)
Yes - there are clear problems with using the link rankings alone - I'll try to think of some way to include the WikiProject ranking (unless you've got an idea (please!)). Martinp23 23:41, 12 January 2007 (UTC)
The WikiProject rating should just override the link ranking (link ranking should only be used if the article is not rated for importance), as the WikiProject probably knows more about the article (and used humans to rate it). Eyu100(t|fr|Version 1.0 Editorial Team) 01:21, 14 January 2007 (UTC)
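
The override rule proposed here is simple enough to state in code - a sketch, with None standing for "not rated by the project":

  def effective_importance(project_importance, link_importance):
      # The human rating wins whenever the project has supplied one;
      # otherwise fall back to the link ranking.
      return project_importance if project_importance is not None else link_importance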

March 2007 trials

Sorry for my absence. The bot is now on the toolserver, which gives it good access to a lot of (mostly out-of-date) data. The issue of replication lag on the toolserver should be resolved in the near future when the server is upgraded. Obviously, direct data access gives huge speed benefits, to such an extent that I was able to do a dummy run on all FA- and B-Class medicine articles in less than 5 seconds! I'm preparing to do another short trial, including the same projects as earlier, involving two iterations - one using WikiProject importance rankings, and the other using link rankings. One of the pitfalls of the linkranking system is that for it to be completely reliable, one needs a huge set of article scores, which we'll hopefully have when full runs are carried out. To clarify, these are the same tests as were carried out earlier, but I want to see if the results are more helpful coming from the toolserver. I'm also going to give this addition formula a whirl at some point (possibly this week, or next). Do you have any other ideas? I'd like to try to include link data if at all possible, on account of the fact that it's there for us to use! Thanks, Martinp23 18:35, 4 March 2007 (UTC)

(unindent) Great to see you back! Also good news about the toolserver. I'll look forward to seeing the data. You should perhaps be aware that the Chemistry WikiProject has spun off many of its articles into its daughter, the Chemicals WikiProject. Regarding the linkranking, I don't really understand your point about one of the pitfalls. I'll sharpen up the numbers for the addition method. Would it be possible to talk on the phone this week, or perhaps on IRC if Eyu100 wants to join in, so we can clarify the various approaches? I'll send an email. Cheers, Walkerma 03:29, 5 March 2007 (UTC)

Some past discussions on the importance issue, for background reading. You can see why we have to tread carefully!
Cheers, Walkerma 05:31, 5 March 2007 (UTC)
Hmm, OK - this will always be a contentious issue, so it's up to us all to find either a fair or a completely objective system. Re: your email - I'll reply later, but it should be fine to do at some point. This is a completely hypothetical idea, but we could use the data from this sort of thing to find out how many people are visiting pages, and assign an importance based on this. All I'd need to do is get access to this sitestats table, and I should be able to run queries on it. Of course, the best thing for now would be to trial this potential system, which would give us a reliable list of how popular pages are. However, this will doubtless lead to anomalies, so we would need a correction factor of some sort (for example, an article on the Xbox 360 is probably visited far more often than Hydrochloric acid or something). I'm going to iron out the bugs in the current program, and run the bot on a few trials with different importance systems - hopefully we'll get at least some good results :). Thanks, Martinp23 13:04, 5 March 2007 (UTC)
Great idea! I think that tool is very new, and it will be very helpful to us if the data are readable. I wouldn't like to rely on it, or we'll end up with a collection looking like the Kama Sutra, but it would be a great thing to include if we can. Now, I just need to start writing my article on [[Sex lives of Nintendo characters on Star Trek]] and I'll get my article ranked #1....! Thanks, Walkerma 16:52, 5 March 2007 (UTC)

Bot and reviewing

I'm not sure if I understand what the results of the bot mean. Will those articles be automatically included in 0.7? That's my understanding - is that correct? If so, then many articles currently nominated will probably be picked up by the bot. So, will the bot mostly phase out the individual reviewing? I can imagine the bot picking up several thousand, in addition to the 2,000 from the 0.5 release. What will the purpose of the Release Version be once the bot is used? Sorry, I'm a bit late coming into this and am not completely familiar with this work for 0.7 and ultimately 1.0. Mahanga (Talk to me) 17:59, 17 March 2007 (UTC)

This is a very valid question. The bot should, in time, make the WP:WPRVN page mostly redundant. This is good - it took us 6 months to review less than 2000 articles by hand (and most of those were done by two or three people). If we want to get a release of 10-20,000 articles, we need to get most of them selected by the WikiProjects themselves (this is best, since I'm not really qualified to judge an article on a computer game I've never heard of, or some aspect of Australian law). We also need to get the selections done by the projects into our hands as easily as possible - and that's what MathBot does. However, IMHO we still need the "manual" part of the reviewing process for the following reasons:
  • Some articles may not have been assessed yet, and these would not be found by MartinBotII.
  • A nominator may have an article they think is a good addition to the collection, but for some reason it got missed by the bot. In such a case, a human can do a reality check, something the bot is incapable of.
  • With sets, it is hard for the bot to notice that a set is incomplete. For example, it would be silly to have all of the planets in the Solar System except Saturn - we want the complete set - but MartinBotII wouldn't notice that it was missing. Similar things may apply to fairness - "Why did you include all of the Champions League teams except Chelsea F.C.?" The bot can't see such things.
  • I'm a firm believer that on Wikipedia you have to consider the human aspect and the community. The manual nominations page IMHO is like the surgery in British politics - anyone can get their voice heard and make a suggestion, and hopefully get a fair hearing. We want to make sure that everyone feels they are part of the effort to make the CD. MartinBotII will have some very sophisticated algorithms to ensure fairness (as best we can), but we need to make sure that doesn't mean that article selection is perceived as a mystical process over which they have no control.
Hope that helps! Walkerma 01:43, 18 March 2007 (UTC)
I wasn't suggesting getting rid of individual reviewing. As you said, the bot will make that process mostly redundant. It'll play a much lesser role than in 0.5, I believe. One last thing: will there be an external review of the bot algorithm? By external, I mean the Wikipedia community. I can foresee some complaints if they find their article or group of articles not being included. Mahanga (Talk to me) 16:13, 19 March 2007 (UTC)
I would be fine with creating an open explanation of the algorithms for the community's perusal. Martinp23 17:14, 19 March 2007 (UTC)
Maybe this page (the actual /MartinBotII page, not this talk page) should be set up to do that? (I should mention, though, that I have tried to mention this page regularly on the 1.0 pages etc. when relevant - so the debate isn't so much hidden as poorly advertised.) It's quite confusing at the moment - partly because I wrote it, and partly because there are a lot of ideas on the table. That's why I haven't really promoted it on the community portal etc. - I didn't want to spend all my wikitime trying to explain details of a proposal that turned out to be unusable anyway. I think I'd like to see how the next set of trials goes, so we can eliminate the ones that are clearly unworkable or bad. Then, when we have a simpler set of choices, we rewrite this page with those options and bring it to the attention of a wider group of people. I'd like to see Tito blog about this too. Does this sound OK? Thanks, Walkerma 22:16, 19 March 2007 (UTC)
Certainly :) - I wouldn't want to consider writing anything up before we had a set algorithm. In terms of progress here - I'm going as quickly as I can, but am getting problems with the toolserver databases crashing every 10 minutes. A new server is "on the way", so we should be able to make some progress next week or around then. Martinp23 22:20, 19 March 2007 (UTC)

Centralised project importance ratings

There is a page here that might be useful, as I suggested above a long time ago. We might want to try that after the test results are in if they are unsatisfactory. Of course, if they are fine, the bot should generate a bigger list of maybe half of the projects. Eyu100(t|fr|Version 1.0 Editorial Team) 03:36, 4 April 2007 (UTC)