User:Slakr/Whiteboard

The Whiteboard

Feel free to comment on the various things I've got planned. As always, be civil, and whenever possible try to include the Dr. Evil pinky, cliché catch phrases, and any other sort of comedic relief. Try not to add any new projects unless I ask you to; otherwise, I might stumble across one and get that same look on my face as when I walk into a room and forget why I walked in in the first place. That'd suck. :P

Also, don't feel as though, just because I have a certain idea listed, you yourself can't just say "screw it" and make it (or create your own version of it). By all means, do so. It's less for me to have to worry about. :P If you run across any snags, need any help, inspiration, advice, or just a nerdy and/or sarcastic joke, feel free to drop me a line.

Be sure to sign your posts, but if you found your way here, you probably know that already, 'cause you're a smart cookie.



Bots

General Notes

  • It might make sense to code all bots with several things:
    • A status page to see what the bot is thinking/doing (updated at most 1/minute if the bot is doing lots).
    • A command page to send commands to the bot.
      • On bots that don't use recent changes output, this could be done with very minimal overhead by checking last modified headers on the command page every 'x' seconds/minutes.
      • On second thought, api.php would make this easier: simply check for the page's most recent revision id (its lastrevid) with a minimal query, as in the sketch below.
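
For instance, a minimal polling sketch in Python (purely illustrative-- the bot account and command-page names are made up) that fetches only the page's lastrevid via prop=info, which is about as cheap as an API query gets:

    import json
    import time
    import urllib.parse
    import urllib.request

    API = "https://en.wikipedia.org/w/api.php"     # assumed endpoint
    COMMAND_PAGE = "User:ExampleBot/Command"       # hypothetical command page

    def last_revid(title):
        # One lightweight query: prop=info returns the page's latest revision id.
        params = urllib.parse.urlencode({
            "action": "query", "prop": "info", "titles": title, "format": "json",
        })
        with urllib.request.urlopen(API + "?" + params) as resp:
            page = next(iter(json.load(resp)["query"]["pages"].values()))
        return page.get("lastrevid")

    seen = last_revid(COMMAND_PAGE)
    while True:
        time.sleep(60)                             # poll every 'x' seconds/minutes
        current = last_revid(COMMAND_PAGE)
        if current != seen:
            seen = current
            # someone edited the command page: fetch it and parse the new commands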

Anti-marketing (RobinBot)

Watches for people adding affiliate marketing links to pages.
Account: RobinBot (talk · contribs · count · api · block log) · (emergency shutdown)
  • postponed Argh. This is postponed for now, simply because the retroactive aspects of the database are a pain and are eating quite a bit of disk space (the text table alone is eating 10 gigs uncompressed; the revision info and such is another 5 or so). Bleh. It's not ultra-high priority at the moment, though, as it's not a rampant problem. So, I'll focus on other things that are more imperative and leave this for a day when it's truly needed. Optionally, I could just get affiliate links added to the spam blacklist as a regex, which I'll go ahead and do next time I get a chance. Anyway, to the backburner you go! *poof* :\

Image Eye

Uses image similarity algorithms to implement image blacklists, locate duplicate images that aren't bit-for-bit identical, and link images to probable subjects.
  • This one might need to get bumped up in priority. As of 17 Jun 07, there are 13,116 total disputed images, 6,629 lacking fair use, 1,239 lacking any copyright tag whatsoever (I think there's a bot trying to fix those, though), and 1,954 with unknown source. I have a sneaking suspicion my bot would be able to help with every single one of those categories. Meh. We'll see. I still have mounds to work on IRL long before that stuff gets done :(
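
Just to sketch the flavor of "duplicates that aren't bit-for-bit": one of the simplest similarity algorithms is an average hash-- downscale, grayscale, threshold against the mean, compare Hamming distance. A toy Python version (assumes the Pillow imaging library is available; not necessarily the algorithm I'd actually use):

    from PIL import Image   # assumption: Pillow is installed

    def average_hash(path, size=8):
        # Shrink to size x size grayscale; set one bit per pixel depending on
        # whether it's brighter than the overall mean.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > mean)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # Resized/recompressed copies land within a few bits of each other:
    # if hamming(average_hash("a.jpg"), average_hash("b.png")) <= 5: probably dupes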


Yet Another Bot for Edit Counting (YABEC)

Significantly more in-depth analyzing of RfA candidates than current statistics.
  • Currently, edit counts on WP:RfA seem weak and unreflective of any intelligent categorization of contributions. People bitch about edit counting-- and I agree-- because raw numbers reflect nothing. Edit analyzing, however, could prove invaluable.
  • Namespace contributions.
    • Frequency. (were all of them a year ago?)
    • Size of edits (allow listing based on this).
    • Aggressive phrases (implying not assuming good faith, fighting, etc).
    • Link possible User talk: discussions with namespace edits, even if they were archived/deleted.
    • Listing out largest user talk posts.
  • Template messages.
    • Use present and past incarnations of templates to figure out the probable types of warnings that the user left.
    • Use edit summaries to make this easier (eg, Twinkle has a pretty standard edit-summary format that can be regexed against).
  • Searchable content of edits/edit summaries ("how many times does 'fuck you' appear?").
  • Frequency of edits.
  • List out edits lacking edit summaries with quicklinks or abstractions of the diff for easy skimming.
  • Group/classify:
    • Automatic edits.
    • Possible grammar corrections (go by edit summary or character range/word modifications; definitely could be prone to error).
      • Use [ap]spell libraries on the edit before and after to detect spelling changes.
    • Vandal-reverting edits.
      • Maybe case-by-case modification of the reversion-detecting regexes by regex-savvy admins? (See the sketch after this list.)
      • Ex: "rvv" is obvious, but what about users who, upon cursory inspection, use "rev. vand." or something else instead of "rvv"? Instead of recrawling, we could simply apply a new regex to the already-archived edits and see the new results.
  • Centralize the individual checks (ex, Slakr (talk · contribs · count · logs · block log · lu · rfas · rfb · arb · rfc · lta · socks)) into an easier, more thorough form that includes things related to, but not the subject of, the checks. For example, while a particular user might not be the subject of arbitration, he/she may have voted or added information on various cases, which should definitely be included, at least IMHO.
  • See Wikipedia:Arguments_to_avoid_in_adminship_discussions. I like the latter sections that mention editcountitis, especially things like the "could be either interacting with users, or simply adding WikiProject tags" bit. That's exactly some of the stuff I want the bot to detect.
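
As a sketch of the case-by-case revert regexes mentioned above (the pattern list and helper name are made up; the point is that adding a pattern means re-running a function over archived edits, not recrawling):

    import re

    # Editable per-wiki list; a regex-savvy admin could append to it at any time.
    REVERT_PATTERNS = [
        re.compile(r"\brvv?\b", re.I),                    # "rv", "rvv"
        re.compile(r"\brev(ert(ed)?|\.)\s*vand", re.I),   # "rev. vand.", "reverted vandalism"
        re.compile(r"\bundid revision \d+\b", re.I),
    ]

    def looks_like_revert(summary):
        return any(p.search(summary) for p in REVERT_PATTERNS)

    # After adding a new pattern, re-classify the already-archived edits:
    # reverts = [e for e in archived_edits if looks_like_revert(e["summary"])]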

CSD

Overall criteria for speedy deletion monitoring and maintenance.
  • It seems that there isn't a bot to watch for creators of pages removing CSD/db tags from pages they created themselves.
  • More often than not, these removals of CSD notices, while against policy, are done in an attempt to add a {{hangon}} tag to the page, thereby replacing the CSD tag with the hangon.
  • Thus, I figure it'd be good to have a bot that converts CSD tag removals by a page's creator into hangon tags, and from there converts further removal attempts into a warning sent to the offender (see the sketch at the end of this section).
  • Hmm, incidentally this could combine with YABEC (the edit-count bot, see directly above) so that while the CSD bot is doing its normal work, it looks for which editor(s) add {{db}} tags, whether or not the page's creator is notified, and whether the page actually gets deleted (or prod'ed, or AfD'd, or whatever).
    • By the way, would a prod-checking bot be good as well? Could we just combine them into one? I mean, consider for PROD:
      • It can't be the prior subject of AfD.
      • It can't be a re-adding of a prod tag.
      Therefore, it would make sense to have a bot that checks to make sure everything is a-ok, and, if it isn't, takes appropriate action and/or notifies the placer of the tag in question.
  • On another note, the bot could do the notifying (ex, "{{empty-warn|ZeeArticle}}") after a certain delay. For example, after 1-2 minutes or so have passed without the tagger (or someone else) posting a userpage warning, the bot could do the honors.
    • This is especially crucial for repeat create offenders when they're not recreating the exact same page name (and hence, no easy deletion log to reference).
  • An IRC bot to sit on #vandalism-*-* would work nicely as well. It could be a separate thread or two separate programs; either way, the main bot (doing the changes) opens a local socket, while the IRC status bot connects to that socket to communicate with the primary bot.
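
A rough sketch of the core check in Python (the function names are mine, and the {{db...}} matcher is deliberately crude):

    import re

    CSD_TAG = re.compile(r"\{\{\s*(db[-|]|db\b|delete\b)", re.I)  # crude CSD-tag matcher

    def creator_removed_csd(old_text, new_text, editor, creator):
        # True when the page's own creator took the CSD tag off in this edit.
        return (editor == creator
                and CSD_TAG.search(old_text) is not None
                and CSD_TAG.search(new_text) is None)

    def react(old_text, editor, warned_before):
        if warned_before:
            return ("warn", editor)            # second attempt: warn the offender
        # First attempt: restore the tagged text and add {{hangon}} on their behalf.
        return ("edit", "{{hangon}}\n" + old_text)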


arrseeD (bot)

A bot for interfacing with the arrseeD socket server.
  • Complement to the arrseeD extension if we don't implement the source & PHP extension patches. Postponed until more MediaWiki implementation details are decided for arrseeD. Ideally it won't be needed, but either way, the burden on the servers is decreased considerably.


signbot

A bot to fill in for User:HagermanBot (proactively) and do on-demand signing of unsigned comments for any talk page (retroactively).
  • Afaik, when hagermanbot was around, it only signed comments that were made while it was active.
  • It's been inactive, but I would like to revive it.
  • Possibly usurp [[Category:Non-talk pages automatically signed by HagermanBot]] and [[Category:Non-talk pages with subpages automatically signed by HagermanBot]] into our own categories, then remove the silly "inactive" notice.
  • For the most part, the proactive part is done. I'm thinking that I'll either split the retroactive part into a separate bot or find a way to easily integrate it with the current one. I have to admit, it would be fun to have "Sine" and "Cosine" :>. For now, I'll just stick that stuff in its own section:

CosineBot

  • With a twist, which would be the semi-difficult part: I want to make it on-demand crawl pages posted by people to its talk page, dive into the history, correct unsigned comments, then post a status report after it's done.
    • It would correct only when it's sure about it (ie, the comment is intact, it wasn't a modification of a prior comment/block of text, and it stands alone); see the sketch after this list.
    • If it wasn't sure, or if the comment was altered, it would post an exception report back to the requesting user (probably using those neat little icons for checkuser, voting, and such).
    • It could have an IRC bot side similar to the setup I was thinking for the CSD bot, so instead of posting status reports to the talk page, it would simply PRIVMSG or NOTICE it back to the requester.
  • Allow admins to send a command to cancel sign requests in cases of vandalism to the bot's command page.
  • Put a limit on the number of unsigned comments per page (again, in case of vandalism where someone's deliberately trying to flood the bot).
    • Bypassable by admins/bot ops.
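
The "only when it's sure" check might look something like this (Python; the signature regex is a rough approximation of a standard en.wp sig, and {{subst:unsigned}} takes a username and a date):

    import re

    # Rough approximation: a user/user-talk link followed by a UTC timestamp.
    SIG = re.compile(
        r"\[\[\s*User(\s+talk)?\s*:[^\]\|]+(\|[^\]]*)?\]\].*"
        r"\d{2}:\d{2}, \d{1,2} \w+ \d{4} \(UTC\)")

    def sign_if_sure(added_block, author, timestamp):
        # Only act on a comment that was added whole and carries no signature;
        # anything murkier goes into the exception report instead.
        block = added_block.strip()
        if not block or SIG.search(block):
            return None
        return block + " {{subst:unsigned|%s|%s}}" % (author, timestamp)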

IFD

A bot to perform maintenance tasks on WP:IFD.
  • I dunno if User:Quadell enjoys archiving stuff on images for deletion all the time, but it seems a bot could do the job, especially when removing already-deleted images.
  • It's possible it could also tag an image discussion or add it to an "IFD entries older than 5 days" page after five days have passed without action.
  • It could also automatically check for any links to the image and post them accordingly.
  • Maybe even check for warnings about prior image deletions on the uploader's talk page?


antitrouble

A bot, run locally on the tool server or somewhere similar for security reasons, to revert (and eventually block) people reported to reporting pages (ex, WP:AIV and WP:UAA) who remove their own reports.
  • This one would have to be de facto open source and run on Wikipedia servers-- if it even gets the green light at all-- mainly because it would need sysop privileges, which are normally never given to bots.
  • All it would do is watch for reported usernames/IPs who attempt to remove themselves from the page. Should they do so, the bot would first revert the edit, giving them the equivalent of a final warning for removing it. Should they try again, the bot would block them for an hour so that another administrator would have a chance to review the WP:AIV report (see the sketch at the end of this section).
    • Obviously bots and admins would be excluded.
  • I could foresee a potential problem: a vandal could craft vandalism to the WP:AIV page in such a way that he actually reports someone reverting his edits, which might, by reflex, cause that person to delete the entry thinking it was vandalism.
    • Of course, should that happen, the good reverter would still receive the warning and instead would simply not revert until an admin has a chance to undo all the damage.
  • Again, this is totally tentative, due to the nature of the bot; however, it seems as though it would be extremely beneficial, since many people don't check the page history on WP:AIV-- it's assumed that deleted entries have already been handled (blocked or declined) by an admin.
  • Ideas? Questions? Comments?
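
In the meantime, the core detection is tiny-- the interesting bits are the escalation rules above. A sketch (Python; {{vandal}}/{{IPvandal}} are the usual AIV templates, the rest is made up):

    def removed_own_report(removed_lines, editor):
        # Lines the editor stripped from WP:AIV that report that very editor.
        needles = ("{{vandal|%s}}" % editor, "{{IPvandal|%s}}" % editor)
        return [l for l in removed_lines if any(n in l for n in needles)]

    def escalate(editor, strikes, is_admin_or_bot):
        if is_admin_or_bot:
            return None                     # obviously excluded
        if strikes == 0:
            return ("revert+finalwarn", editor)
        return ("block", editor, "1 hour")  # give a human admin time to review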


Wiki

WikiMonkey

A thinggymabobber for performing random tasks that I haven't been able to find (or haven't been able to find working the same way) in other scripts.

  • List creation.
    • Allow any link to anything to be added to arbitrary lists for later use.
      • checkuser request lists.
      • sockpuppet evidence lists.
      • 3RR counting (and warning/reporting).
      • convert contents of Special:Listusers to various formats (including checkuser).
  • Convert {{User*|TheUser123}} and [[User:TheUser123]] links into whatever we want them to be when rendering pages.
    • In the process of doing so, tack on an extra link so that we can add a particular user to a running list (ex, "People to CheckUser").
  • Eventually maybe even integrate with arrseeD in some form.
  • Maybe a semi-template system? Ex:
    • Some people might want a toolbox that is more suited for vandal-fighting.
    • Others might be more interested in CSD/AfD processing.
    • Some might not want any of that and only want article editing shortcuts.
    In any case, instead of making people edit javascript source, it could be relatively easy to select various layouts in WP-template style. Of course, this is fluff, so who knows.
  • Context-specific API calls.
    • If viewing a page with a CSD tag on it, crawl to see who placed it, when, who the article owner is, pages linking to it, etc. (see the sketch at the end of this section).
    • If on article, maybe check for most recent edit and by whom.
    • ... etc.
    • Basically, kind of like popups & twinkle, but in a different way with less room for error.
  • Edit conflict checking.
    • Actually, I should probably experiment with that as an extension (using ajax), if it's not already done.
  • Pre-subst'ed warnings.
    • Instead of having to sandbox and such, we could preload all of the template messages once, then allow full customization of the end result without having to make extraneous edits.
  • Watchlist timers. Something that always annoyed me was having to go through my watchlist and delete stuff. Instead, it'd be cooler to have a timer with a certain preset expiration time to automatically remove watchlist items.
    • This could also work with various other things, too. PRODs are time based, so one could literally have a countdown to the time when a PROD article can be deleted.
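
For the CSD crawl mentioned above, the lookup is just a short walk down the history-- sketched here in Python for brevity, even though WikiMonkey itself would be javascript; the helper name and the 50-revision window are mine:

    import json
    import urllib.parse
    import urllib.request

    API = "https://en.wikipedia.org/w/api.php"   # assumed endpoint

    def who_placed_tag(title, tag="{{db"):
        # Walk the history newest-to-oldest; the last consecutive revision that
        # still contains the tag is the one where it was added.
        params = urllib.parse.urlencode({
            "action": "query", "prop": "revisions", "titles": title,
            "rvprop": "user|timestamp|content", "rvlimit": 50, "format": "json",
        })
        with urllib.request.urlopen(API + "?" + params) as resp:
            page = next(iter(json.load(resp)["query"]["pages"].values()))
        placer = None
        for rev in page.get("revisions", []):    # newest first
            if tag in rev.get("*", ""):
                placer = (rev["user"], rev["timestamp"])
            else:
                break
        return placer   # None if the tag predates the 50-revision window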


MediaWiki

arrseeD

A high-volume, event-driven socket server for the recent changes list.



  • Takes advantage of high-concurrency polling engines (ex, epoll, kqueue).
    • libevent will be easier to use with this, mainly because it already does the bulk of the hard work (i.e., cross-platform compatibility through choosing the best polling engine for what's available).
    • Of course, it does have a silly unpatched bug whereby some Linux 2.4 kernels are misidentified as supporting epoll: the distro's glibc puts in a stub header that does fuckall, while the distributed `./configure` doesn't check whether epoll actually works and simply assumes it does if it can find the header. Pff.


  • Hopefully it'll reduce the number of server hits that various bots, admins, and patrollers make against the squids and database on large-scale MediaWiki sites (like Wikipedia).
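
To make the polling-engine point concrete, here's a toy event loop in Python-- its selectors module picks epoll/kqueue/poll for you, much like libevent does (the port and handler names are made up):

    import selectors
    import socket

    sel = selectors.DefaultSelector()    # chooses epoll/kqueue/poll, a la libevent

    def accept(server):
        conn, _ = server.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, read)

    def read(conn):
        data = conn.recv(4096)
        if not data:                     # peer closed
            sel.unregister(conn)
            conn.close()
            return
        # parse commands here / fan the recent-changes feed out to subscribers

    server = socket.socket()
    server.bind(("", 9999))              # hypothetical listen port
    server.listen(128)
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)

    while True:
        for key, _ in sel.select(timeout=1.0):
            key.data(key.fileobj)        # dispatch to accept()/read()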

ACL

  • Not sure about access control:
    • Throttling is a must to prevent DoS from single IPs.
    • But what about DDoS? Couldn't someone connect a bunch of drones to the server and add trivial but consumptive regexes?
      • Yes. So, there needs to be at least some sort of optional, if not mandatory, user group system.
      • I would also suggest the possibility of OpenSSL libraries, but that seems a little much for the bots that might connect to it.
      • It might be easy to simply place basal limits on server access (kind of like generic I:lines on IRC servers with corresponding restrictive Y:lines), but then allocate flags to people based on what they need to do. For example, I'd argue that most admins do not know how to properly use regular expressions, and badly-formed regexes could seriously hurt performance; so they wouldn't get an "R" flag, but they'd get an "A" flag that allowed them to have longer watchlists and blacklists and to perform queries on data for trends/whatever. Non-admin but relatively trusted users could have a "V" or something for countervandalism.
        • If that's implemented, it would also be possible to allow linking of accounts to WP profiles.
          • Maybe a verification text? I.e., "paste this commented text on your user page while logged in."
          • They do that, and the bot recognizes the account as the same person automatically.
          • Or, the bot simply emails the person.
          • Once verified, the person is given automatic flags based on his/her membership in other groups.
    • Plus, having accounts would be good for maintaining personal settings for watchlists, blacklists, whitelists, ignores, etc., instead of forcing the client to send them each time it connects, which would eat lots of processing while compiling any regexes. (Split to separate regex section.)


Regex processing

  • (split from ACL) Regexes would probably need to be recompiled anyway, because we wouldn't pre-compile all regexes for everyone pre-emptively (ouch, think of the memory/CPU).
    • A solution to this would be staggered regex compiling (see the sketch below). That is, anticipate that we'll need to compile a bunch of regexes when someone with 1000 regexes connects (I'm thinking a bot, for example), and only compile a handful at a time on each timeout/event loop. Even though that dude has to wait longer, we're 99% certain that everyone else won't be stuck waiting 30 seconds while the connecting user's stuff is compiled.
      • It might also be a good idea to spawn a thread that handles this and other regex processing on its own, since this would be the most obvious bottleneck on the loop (well, not really loop, but timeout loop).
      • Similarly, another thread could be dispatched to do fnmatch-style mask processing.
      • If at any time either of the threads gets bogged down, it would trash the offending regex, notify the user, and log it for bot ops to see. (The threads would self-time regex compiles and compare on the fly against normal compile times for trivial regexes, so as to avoid timing errors from an overloaded system.)
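
A sketch of the staggered-compile idea (Python; the per-tick budget is an arbitrary knob):

    import re
    from collections import deque

    PER_TICK = 5          # arbitrary: compile at most this many patterns per pass
    pending = deque()     # (user, pattern) pairs awaiting compilation
    compiled = {}         # user -> list of compiled regexes

    def enqueue(user, patterns):
        # Called when someone connects with a big pattern list (e.g., a bot).
        pending.extend((user, p) for p in patterns)

    def on_timeout_tick():
        # Run once per event-loop timeout: chip away at the backlog so one
        # 1000-regex bot can't stall everybody else for 30 seconds.
        for _ in range(min(PER_TICK, len(pending))):
            user, pattern = pending.popleft()
            try:
                compiled.setdefault(user, []).append(re.compile(pattern))
            except re.error:
                pass      # trash it, notify the user, log for bot ops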


Methodology

  • One of two routes:
    • Standalone daemon with a bot taking all the hits to a functioning RC page.
    • Integrate with the slaves, and slaves report changes to RC server, which then relays it on to admins.
      • Much more extensive, and would require a PHP extension in order to scale without problems.
      • Dunno if that's warranted, because I'm not sure of the load on the RC page. (i.e., it might be overkill)
      • However, this would have the added benefit of being able to be easily cross-wiki. That is, a blocked user on en.wp could hop on over to uncyclopedia, but one could monitor that from the centralized RC server.
    Bleh. I dunno. I'm thinking the bot route might be easiest to start off with.
    Either route still needs the basic daemon written to support people connecting to it.


  • Possibility of WALLOPS-style communication between connected users?

Administration addition

  • Possibility of extending this into a full administration daemon? That is, instead of admins being required to perform tasks via the web, we could have a dedicated, persistent SQL thread that asynchronously queries the database directly. This also avoids cases where vandals add ridiculous amounts of information to a page, or where the added material intentionally brings any potential reverter's browser to a screeching halt. This would be easy to undo:
>> .hist Star Wars /* asks server to list the current revision history for the Star Wars article */
<< #       | date              | user       | flags | size   | comment
<< -------------------------------------------------------------------------------------------------------------------
<< 1165859 | 12/25/2007 11:21a | ImmaVandal | m     | 120757 | Merry christmas fuckers-- betcha can't revert this!!!!
<< /* there'd be other revisions in this style. */
>> .rollback [[Star Wars]] [[ImmaVandal]] evil vandalism. /* performs a rollback of edits on Star Wars made by ImmaVandal */
<< By your command. /* confirmation of command execution by server */
>> .hist Star Wars
<< #       | date              | user       | flags | size   | comment
<< -------------------------------------------------------------------------------------------------------------------
<< 1165860 | 12/25/2007 11:23a | Slakr      | m     | 120757 | Reverted 5 edits by [[User:ImmaVandal]]; evil vandalism.
<< 1165859 | 12/25/2007 11:21a | ImmaVandal | m     | 120757 | Merry christmas fuckers-- betcha can't revert this!!!!
<< /* etc */
  • The process could be automated by a server mode determining the output format of the replies. That is, the above format (which would probably be the end result of sprintf formatting, for example), is easy to read for humans, but a pain for bots and scripts. Thus, one could have a connection mode that switches the output format to something like the following:
>> .hist Star Wars
<< 013\tStar Wars\t1165860\t1152632156\tSlakr\t1\t120757\tReverted 5 edits by [[User:ImmaVandal]]; evil vandalism.\n\r/* etc */ \0
/* end of request */
  • Notice that the `date` and `flags` columns changed (date is now unixtime, while flags are now bitwise flags), and that the columns are tab (\t) delimited instead of space-padded and pipe-delimited-- much easier to parse using strtok(), for example. Also, since we'd already know the protocol revision the client is connecting with, we could drop the label row, because we can assume the bot already knows what the columns are. Another thing: the server sends it all on one line, terminated by a null (\0) to signal end of data, and the whole thing is prefixed with two extra columns: 013 (an arbitrary response code similar to those used by smtpd and, of course, ircd) and Star Wars (the name of the article for which it's spilling revisions). A parsing sketch follows at the end of this subsection.
    • Optionally, this could be output as XML, JSON, CSV, or whatever, maybe using something like `.hist [--format=<xml,json,atom,csv,tsv,human>] [--limit=<n>] <page>`
    • Honestly, I'd rather be doing this with a binary protocol, because it's much easier to expand (not to mention faster, more secure, and binary-safe), but would leave a lot of the "newbie" developers who want to use shake-and-bake bots in the dark. Oh well.
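
Client-side, parsing that hypothetical machine mode is almost a one-liner per row (Python; the column order follows the example above, and the exact framing is obviously still up in the air):

    def parse_hist_reply(raw):
        # One reply: rows of "code \t page \t revid \t unixtime \t user \t flags
        # \t size \t comment", with a null byte signalling end of data.
        assert raw.endswith("\0"), "truncated reply"
        rows = []
        for line in raw[:-1].splitlines():
            code, page, revid, ts, user, flags, size, comment = line.split("\t", 7)
            rows.append({
                "code": int(code), "page": page, "revid": int(revid),
                "time": int(ts), "user": user, "flags": int(flags),
                "size": int(size), "comment": comment,
            })
        return rows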

Watch groups

  • Well, pgkbot (and its friends) made me think that several groups of people connecting to the server would want to be informed of the same body of changes.
  • Because of this, it would be both a performance increase and an ease-of-use increase, because:
    • Instead of forcing the user to maintain his own whitelists/blacklists/graylists/watchlists and such, a list of trusted users could do it instead, similar to how it's already done on the vandalism irc channels.
    • 1 regex addition is done for 100 users, instead of 100 users adding 1 regex.
    • 1 setting is applied to all 100 members of the group, instead of 100 members applying 1 setting.
  • The "WikiProject" type groups would probably like to control vandalism as well, but only on a subset of pages.
  • Maybe at some point implement a patrol-ish type system for online members of a group.

IRC bot

  • Have an IRC bot send DCC connect-to-me requests to people within certain channels, and automatically apply group membership based on the port opened for them. Of course, this isn't by any stretch of the imagination secure or practical for large groups (i.e., more ports must be opened), and it's not compression-friendly, since DCC isn't compressed unless scripted client-side to make it work (which doesn't seem too practical).
  • A connected, authenticated user could add n!u@host.com masks to group lists for increasing permissions.
  • A connected, authenticated user could also send a command to authorize channel commands. E.g.:
    1. User on the server, "John," is "User:John." He adds *!*@wikimedia/John to his "authorized hosts."
    2. John12345, John's IRC nickname, connects to the IRC server and joins a particular group channel, "#vandalism-rabbits-en-wp," which explicitly hopes to prevent rabbit vandalism (they were fine with getting their feet lobbed off, but damn it to all hell if they'll tolerate vandalism to the Rabbit page).
    3. John12345 authenticates with NickServ, automagically gets his host set to "John12345!u=someident@wikimedia/John"
    4. Bot sees this, links the nickname with same permissions John has.
    5. John12345 does an "!add watchlist editsummary !c[ae]{1,2}!i x=169 r=catch possible misspellings of carrot"
    6. Bot adds a case-insensitive preg watch on even the most Gaelic spellings of "carrot" for a week, attributing it to "John" for all of the channel members of "#vandalism-rabbits-en-wp"
  • I'm thinking this is more of a convenience thing, so it's probably going to end up being a fluff feature, if it's ever added.
    • BUT, the groups thing is definitely going to be added in one way or another.

Spamlist

Update Tim's blacklist extension to be faster and use fewer regexes, since we're getting close to hitting the limits and giving preg an aneurysm.
  • I think the main thing is to turn the domain blacklists into an efficient array rather than regexes, as regexes are overkill for simple domain blocks.
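
For the domain side, hash lookups per parent domain beat a megaregex (sketched in Python for brevity, though the extension itself is PHP; the domains are made up):

    BLOCKED_DOMAINS = {"spam.example.com", "affiliates.example.net"}  # fake entries

    def is_blacklisted(host):
        # O(1) set lookups: check the host itself and every parent domain.
        parts = host.lower().split(".")
        return any(".".join(parts[i:]) in BLOCKED_DOMAINS
                   for i in range(len(parts)))

    # is_blacklisted("shop.spam.example.com") -> True, no regex engine involved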


Adding diffs to api.php

Patch api.php to allow diffs (so that one doesn't have to get the full contents of each revision).
  • I think that I'm going to go ahead and allow multiple revision ids. Sure, at worst it will double the amount of data transferred (since for each revision one would have to get the revision directly before it, which involves another query), but since they're limited to 200 for bots and 50 for users, I think it's safe. Fifty queries are nothing, right? ;)
    • Which reminds me: api.php didn't appear to use memcached when I took a first look at it, but I could be wrong. I need to go check whether that could be added as well, in order to possibly ease the query load. Meh, I'll check it out later.
  • There's a minor issue with inverted requests (ie, someone requests a page's revision history from most recent to oldest): diffs computed in that direction come out inverted, making it look like the older editor did the opposite of what he actually did, since the newer revision gets treated as the base. I could just note that in the documentation. We'll see-- maybe some people would rather have that behavior anyway. *shrug*
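
The inversion is easy to see with any diff library-- swap the argument order and additions become removals (Python's difflib here, purely to illustrate):

    import difflib

    old = ["Hello world"]
    new = ["Hello world", "Something an editor added"]

    # Forward diff: the newer editor's line shows up as an addition ("+").
    print("\n".join(difflib.unified_diff(old, new, lineterm="")))
    # Inverted request: the same edit now looks like a removal ("-"),
    # as if the older revision's author deleted the line.
    print("\n".join(difflib.unified_diff(new, old, lineterm="")))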


random

human2paperwork

Allow submissions to complex processes such as AfD and sock puppets to be simple and/or interactive.
  • Yeah, because there have been times I wanted to submit to the suspected sock puppets page but actually cringed because I remembered how painful the entire process is. Just take a look at this, for example, to see how confusing it all is-- what with all the headers, templates, and such.
  • Thus, I say we find a way to make that easier. Something that creates a subpage or a header or something that has simple questions that are easily parsed by bots to convert them into the final product-- complete with templates, datestamps, signatures, and all.
  • From the simple page, the user can paste links, evidence, comments, and such, and not have to worry about accidentally not jumping through one hoop or another. The bot would take what the user pastes (links, raw user names, comments, supports, objections, and such), and convert them over to whatever templates are necessary. I mean c'mon-- "{{user5|1=SOCKPUPPET1}}" ? Sure, the argument can be made that those reporting sockpuppets should know what they're doing, but even I, a person with an administration background on a large IRC network, think that the sockpuppet process is a pain in the ass.
    • Not even that, but then after you've added all of the evidence, using all of the cryptic templates (with the cryptic formatting that doesn't look like any language anywhere), you *still* have to place a "suspected sockpuppet" template on the person's user page, add an edit summary, preview, and submit.
    • By the way, all during this time the sockpuppet is doing all sorts of nastiness, and once it's blocked, it simply reregisters a new account.
    • In the end, unless the process is simplified, the sockpuppeteer is actually "winning" in the battle because he's causing everyone headaches.
    • Thus, the only way to "defeat" the puppeteer is to make the process simpler.
  • Now granted, the above example is for sockpuppets, but the same could be said of checkuser or AIV as well. I find myself locating a list of user names that appear linked, then having to cut/paste them into kate and find/replace with a regex to convert them to {{checkuser|username}} format, then also add a comment. Then preview and submit a new entry-- that's if someone hasn't already edited the page in the meantime, at which point I have to recopy and repaste everything I just did and hope, again, that the page isn't modified while I'm copypasting (which, for RFCU, isn't that often, as it's low-traffic, but happens very frequently on AIV if you don't use a script, for example).
  • SO, I'm not sure how the logistics of this will work. I originally thought that creating a temporary subpage might work, but for frequent submissions that would congest the database-- even if the pages were deleted (since deleted pages aren't really deleted).
    • It's possible to have a Submit: namespace that would require an extra template, but would automatically make user submissions fully-protected in a manner similar to User:Whomever/*.js.
      • This would require config modifications. Possibly an extension to clean up old submits? Auto-transclude onto reporting pages?
    • It would also be possible to simply allow users to create a Submit/ subdirectory on their individual user pages, which a bot monitoring the RC list would crawl.
      • ex, Slakr wants to report someone to AIV. He goes to AIV to figure out how to report people and decides the person meets the criteria. He clicks "submit new report," which forwards him to "User:Slakr/Submit/AIV," preloaded with very simple submission rows, commented out ("Who are you reporting?" and "Why are you reporting him/her?"). Slakr fills in the information, previews, and saves the page. The bot sees this, does magic on the specified vandal (if an IP, convert to the vandalip template; if a user, convert to the userinfo template), blanks the Submit/AIV page, and posts the report to AIV as soon as possible, autosigning it with the user's signature (see the sketch below).
        • Bot would check for blocked users before honoring submissions.
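
The bot's conversion step could be as dumb as this (Python; {{vandal}}/{{IPvandal}} are the usual AIV templates, everything else is invented for the example):

    import re

    IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")   # rough IPv4 check

    def format_aiv_entry(who, why, reporter):
        # Turn the two plain answers from a hypothetical User:X/Submit/AIV page
        # into a standard AIV line, picking the right template for IPs vs accounts.
        who = who.strip()
        tmpl = "IPvandal" if IP_RE.match(who) else "vandal"
        return "* {{%s|%s}} %s (submitted on behalf of [[User:%s]])" % (
            tmpl, who, why.strip(), reporter)

    # format_aiv_entry("192.0.2.5", "Blanking pages after final warning.", "Slakr")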


  • As a quick-hack patch alternative, it might be useful to allow the "+" tab normally found on talk pages to be enabled per-namespace on main pages as well. That way, a link could be added to a submission page that prepopulates the text (i.e., the simple text questions above) and lets a bot crawl the page on updates, converting unformatted submissions to standardized, formatted ones-- all without having to worry about edit conflicts and incorrect template use/formatting/etc. I like this one much better than User: space subpages and having to add Submit: namespaces and whatnot. Plus, it's simpler and easier-- two things I'm enthusiastic about ;) ;) ;).