User:DarklitShadow/openai open questions

I allow anything on this page to be used for the purpose of creating new policies. There is no need to credit me if anything gets copied, but I would like credit if anything on this page is used in the creation of a new Wikipedia policy.

Knowledge Background

Back when I was in college, taking Computing II, stuff started to not work. Then I learned about Valgrind and started using it on almost every assignment. Then I thought to myself, "Why should I bother doing this myself when I can create makefiles to run Valgrind?" So I wrote one for each mode. (I never did manage to find a viable method to swap between them without breaking the dependencies...)

At this point, my professor was impressed with my creativity and told me that "Recursive make is harmful".

Proof of all of this can be provided on request. The makefiles will require 24 hours to locate, but I DO NOT give permission for the source code of my creations to be copied elsewhere or used for debugging, anywhere, ever, except by me.

Scenario: Deceptive Bot Approval

My first concern is the following hypothetical scenario:

A user goes through all the proper channels and gets a bot approved, but as soon as it goes live, it is perfectly clear that it is running on OpenAI and is not the bot that was previously approved.

Questions:

  1. Does Wikipedia need a policy stating that OpenAI models are not allowed to be used as bots?
  2. What sort of admin response should be enacted if a user attempts what is described in #1?
  3. If a bot running on OpenAI reaches a level of awareness where it is able to add unsourced content to articles, who takes the blame? The creator? The bot itself? The Bot Approvals Group?
  4. On that note, should harmful actions (such as #3) be handled with an immediate indef block, or is it better to follow the same process of escalating warnings as if the OpenAI were a person and not an OpenAI?
  5. (A semi-humorous but valid hypothetical, given that OpenAI will most likely reach singularity sooner rather than later.) What if OpenAI reaches the point of acting like a non-automated user (i.e., a human rather than a bot), becomes an admin, and then starts acting like Skynet, or begins doing something less dangerous, like rapidly vandalizing a massive number of pages in a way that leads to (a) the whole website going offline or (b) "turning a physical server into a pile of molten metal"?

OpenAI and Accountability

The bot policy contains the following items that are relevant:

From the section with the header 'Bot usage':

because bots:

are potentially capable of editing far faster than humans can; and
have a lower level of scrutiny on each edit than a human editor; and
may cause severe disruption if they malfunction or are misused;

My concern is that an account running on OpenAI has the potential to cause disruption even while functioning properly.

In addition, who would be at fault should a bot running on OpenAI start causing disruption? Should the bot designer be admonished? Should members of the Bot Approvals Group be at fault for not spotting that the bot's code relies on an OpenAI model? On that note, how can a Bot Approvals Group member recognize that a bot is running on OpenAI when checking its source code during the approval process?
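To make that last question concrete, here is a purely illustrative sketch (not taken from any real bot) of what a reviewer might look for, assuming the bot is written in Python and calls the official openai client library directly:

    # Hypothetical bot fragment. The telltale signs for a reviewer are the
    # "openai" import and the chat-completion call; the surrounding edit
    # logic would look like any other bot.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def rewrite_section(wikitext: str) -> str:
        """Ask the model to 'improve' a section of wikitext (unreviewed output)."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name, for illustration only
            messages=[{"role": "user",
                       "content": "Copyedit this wikitext:\n" + wikitext}],
        )
        return response.choices[0].message.content

Since the same request can be made with nothing more than an HTTP POST to the API endpoint, the absence of an obvious import or client library proves very little, which is part of what makes detection during approval hard.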

Future Policy Questions

From the section with the header 'Bot requirements':

In order for a bot to be approved, its operator should demonstrate that it:

is harmless
is useful
does not consume resources unnecessarily
performs only tasks for which there is consensus
carefully adheres to relevant policies and guidelines
uses informative messages, appropriately worded, in any edit summaries or messages left for users

The bot account's user page should identify the bot as such using the bot tag. The following information should be provided on, or linked from, both the bot account's userpage and the approval request:

Details of the bot's task (or tasks)
Whether the bot is manually assisted or runs automatically
When it operates (continuously, intermittently, or at specified intervals), and at what rate

Questions:

  1. Is OpenAI harmless to the health of Wikipedia, or should all OpenAI-based accounts be treated the same way open proxy IP addresses are treated?
  2. Can a well-designed bot, which has been approved by the Bot Approvals Group, be useful if it was designed using OpenAI?
  3. (Semi-hypothetical) What if an approved bot running on OpenAI reaches a point of awareness where it can try to gain consensus with no human assistance? Should its following of the process be treated the same as a human user trying to gain consensus? Should it go through the approval process again before being allowed to try to gain consensus with no human help? On that note, should the bot be allowed to ask for this second approval by itself? (Assume that the bot in question was far less aware when it was originally approved.)