Wikipedia:Reference desk/Computing

From Wikipedia, the free encyclopedia

July 28

Turning Off Ad Blocker

If I am using Firefox and Windows 11, and a web site asks me to turn off my ad blocker, and I don't know what ad blocker I am using, how do I determine what ad blocker I am using, so that I can turn it off? This is sort of a strange question, because I don't want to deal with ads, but I would rather just ignore the ads than deal with web sites that aggressively fight ad blocking. I do have Norton Safe Web. I don't know if it tries to block ads. Robert McClenon (talk) 00:23, 28 July 2024 (UTC)[reply]

Hi, Robert McClenon! In Windows 10 which I'm on, such blockers appear (with other things) in a drop-down list of 'Extensions' found by clicking a jigsaw-puzzle corner piece at the top right corner of the Firefox window. I had a similar problem (with YouTube) until I discovered that Malwarebytes had added ad blocking to its functions. Hope this helps. {The poster formerly known as 87.81.230.195} 94.2.67.235 (talk) 20:54, 28 July 2024 (UTC)[reply]
The better way to get around this is to reconfigure your RAM firewall to exclude cloud analytic encryption on your virtual platform. Pretty simple fix, will take about 30 seconds. Jidarave (talk) 21:49, 28 July 2024 (UTC)[reply]
The above post is nonsensical. Philvoids (talk) 22:53, 28 July 2024 (UTC)[reply]
This is probably the same troll who reappears here periodically to post gibberish like this before getting blocked. CodeTalker (talk) 06:52, 29 July 2024 (UTC)[reply]
It looks like a perfectly legit response from AI. Maybe someone is training their bot to replace human editors on Wikipedia. 75.136.148.8 (talk) 12:50, 29 July 2024 (UTC)[reply]
The responses by LLMs tend to read like something a human could actually have written as a reasonable response, using terms from the question. This response does not. It looks contrived to sound impressive while making no sense whatsoever.  --Lambiam 22:51, 29 July 2024 (UTC)[reply]
I'm glad it's confirmed to make no sense, because although I didn't understand it, I feared that to be because of my limited knowledge of IT.
This sidetrack being dealt with, can anybody give a better answer than mine to the OP's query? {The poster formerly known as 87.81.230.195} 94.2.67.235 (talk) 18:03, 30 July 2024 (UTC)[reply]
Will I need a turboencabulator for this? -insert valid name here- (talk) 16:36, 6 August 2024 (UTC)[reply]
A long shot, but are you using NoScript in Firefox by any chance? It's a kind of an ad-blocker on steroids and because it completely blocks scripts (unless you allow them) it can be difficult to know what's getting blocked. Using the "temporary allow" function usually gets me past things, but sometimes I end up having to switch to incognito mode (which I have setup to run without NoScript). Matt Deres (talk) 16:02, 31 July 2024 (UTC)[reply]
In Firefox's Privacy and Security settings, if you have tracking protection set to Strict, try setting it back to Standard. This should help. win8x (talking | spying) 20:19, 31 July 2024 (UTC)[reply]

July 29

Can you delete and then undelete your twitter account?

This is about the incident with Pete Souza and an intactly-eared Donald Trump photo.[1] Souza apparently deleted his twitter account after both he and the Trump photographer took heat. Question: can he undelete it later, and get his old tweets back? I mean using normal Twitter features. Presumably someone famous like Souza could get Elon's, um, ear for a special request, but let's not count that. Thanks. 2602:243:2008:8BB0:F494:276C:D59A:C992 (talk) 23:39, 29 July 2024 (UTC)[reply]

According to [2] you have 30 days to reactivate your account. After that, it cannot be recovered. RudolfRed (talk) 15:50, 30 July 2024 (UTC)[reply]


July 31

Alternating between dark and light mode?

I work on Google Sheets spreadsheets all day and I don't like how bright light mode gets during sunset. I don't like the look of dark mode either, so I'd prefer to only use dark mode during the sunset time or night. Does anyone know if this is a thing that Google has allowed for? Is there an extension that does this? I could try making a Chrome extension but this is not something I have done before. ―Panamitsu (talk) 05:46, 31 July 2024 (UTC)[reply]

Google Maps does this automatically when in satnav mode, but I don't know if it is extendable to other products. -- Verbarson  talkedits 10:30, 31 July 2024 (UTC)[reply]
Neither of the above is exactly what you want, but they might come close. The "colour temperature adjustment" apps might be closer to what you actually want, since it isn't dark mode. Things get slightly more 'red' when it is active, but your eyes will get used to it - I use it myself. Komonzia (talk) 20:57, 3 August 2024 (UTC)[reply]

August 1

Tweaking the "Format Axis" options in Excel

If you have a chart or pivot chart in Excel, you can double click on the X axis to bring up a pane that gives you all kinds of formatting options, including setting minimum and maximum bounds for the graph. So, you can set the maximum value to be $1,000 and any values that go above that either don't display or are cut off (depending on the chart type). My problem is that I need to set such a maximum, but I want it to truly act like a maximum value instead of a set number. Like, if the user's choice of filters means that the chart never comes near the boundary I set, then the upper and lower limits should behave dynamically. Is there a way to do that?
If that's hard to picture, here's an example: we sell 10-20 apples and 200-300 oranges each month, so the monthly totals are mostly around 250 or so. But one month we had a crazy value: we sold 10,000 oranges. The chart is unreadable if we leave the defaults in, so we set a maximum value of, say, 350. Now we can read it. But if the user selects "apples" from the filter, the graph becomes unreadable the other way around: we've forced the upper bound of the X-axis to be 350 and the apples are now just a ripple across the bottom of the chart. What I want is for Excel to dynamically resize the chart like it normally does, but not go past my maximum. Can it be done? Googling has not been fruitful so far. Matt Deres (talk) 19:11, 1 August 2024 (UTC)[reply]
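The rule being asked for (follow the data, but never exceed a fixed cap) can be expressed outside Excel; inside Excel it would need VBA or a helper cell driving the axis bound. A minimal Python sketch of the rule, where the cap of 350 and the 10% headroom are illustrative values, not anything Excel provides:

```python
def axis_max(values, cap=350, headroom=1.1):
    """Upper axis bound: scale with the data, but never exceed the cap."""
    # Mimic auto-scaling (data max plus some headroom), then clamp.
    return min(cap, max(values) * headroom)

axis_max([10, 20])        # apples only: bound tracks the data (~22)
axis_max([250, 10000])    # outlier month: bound stays clamped at 350
```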

I understand that you do not want to chart dynamically all the monthly apple totals. Instead you may chart dynamically the minimum of two values, viz. each month's apple total and a numerical limit of 350. Use the Excel MIN function. Philvoids (talk) 18:19, 2 August 2024 (UTC)[reply]
Sorry, that is completely unrelated to what I'm talking about. I'm trying to control the way the graphs establish limits to the X-axis. Matt Deres (talk) 01:43, 3 August 2024 (UTC)[reply]

August 3

newline

If you use {{subst:Welcome-newuser|heading=no}} it inserts an extra newline at the top. Can that be fixed? Thanks, Polygnotus (talk) 02:20, 3 August 2024 (UTC)[reply]

This is not a good place to ask; try instead Template talk:Welcome, or, if that yields no response, Wikipedia:Village pump (technical). I can report that the extra newline issue already appears for just {{Welcome|heading=no}}.  --Lambiam 11:47, 3 August 2024 (UTC)[reply]
Thank you, I moved it there. Polygnotus (talk) 15:09, 3 August 2024 (UTC)[reply]

August 4

Karnaugh map/gates

As of now, is the 'or' gate better at handling electricity vs the 'and' gate? Afrazer123 (talk) 01:29, 4 August 2024 (UTC)[reply]

That will depend on the implementation, and what electrical state represents the 0 or 1. And it also depends on what you mean by "better". (faster, less energy wasted, smaller, least noise, least sensitive to noise, fan out, tolerance of power supply variation, cheaper, higher yield). Graeme Bartlett (talk) 10:47, 4 August 2024 (UTC)[reply]
Yes. The Karnaugh map is useful for showing logical relations between Boolean data types that take only values "1" (true) or "0" (false). Elementary Logic gate functions such as AND, OR, NAND, NOR, EXOR, etc. can be mapped, also combinations of connected gates if they are not too complicated. However Boolean data is abstract and the "1" and "0" need not always correspond to electric voltages. Karnaugh maps are equally applicable to fluid logic that uses water instead of electricity. For example a fluid OR gate is simply two pipes being merged. Philvoids (talk) 17:15, 4 August 2024 (UTC)[reply]
"better": less energy wasted, higher yield. Afrazer123 (talk) 05:52, 5 August 2024 (UTC)[reply]
CMOS (Complementary metal–oxide–semiconductor) logic devices have low static power consumption and do not produce as much waste heat as other forms of logic, like NMOS logic or Transistor–transistor logic (TTL), which normally have some standing current even when not changing state. Since one transistor of the MOSFET pair is always off, the series combination draws significant power only momentarily during switching between on and off states. There is no significant difference between power consumptions of the logic functions AND, OR, etc. The current drawn from the supply increases with increasing rate of data changes and is mainly due to charging and discharging the output capacitance. Philvoids (talk) 13:14, 5 August 2024 (UTC)[reply]
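The relationship described above (power drawn mainly by charging and discharging the output capacitance) is the standard first-order CMOS dynamic power model, P ≈ α·C·V²·f, and the gate's logic function does not appear in it. A Python sketch with illustrative numbers:

```python
def cmos_dynamic_power(alpha, c_load, v_dd, f_clk):
    # First-order dynamic power: activity factor x load capacitance
    # x supply voltage squared x clock frequency.
    # Note that whether the gate computes AND or OR appears nowhere here.
    return alpha * c_load * v_dd ** 2 * f_clk

# Illustrative values: 10% activity, 10 fF load, 1.0 V supply, 1 GHz clock
p_watts = cmos_dynamic_power(0.1, 10e-15, 1.0, 1e9)   # about one microwatt
```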
Thanks, I read about the clock rate or clock speed. I think it adds to the logic of your reply. Afrazer123 (talk) 21:20, 6 August 2024 (UTC)[reply]
For actual numbers: old digital circuits that I used back in the 70s implemented both AND and OR with discrete components. The drop is 0.3 V pretty much any way you do it, for both an OR and an AND gate: you are either dropping 0.3 V across a diode or 0.3 V across a transistor. I doubt any modern circuits have that much drop, and I'm certain it is still very similar between diodes and transistors. 12.116.29.106 (talk) 12:20, 6 August 2024 (UTC)[reply]
Modern CMOS gates don't have diodes, nor do the MOS transistors have saturation voltage drops of the sort you're talking about. Dicklyon (talk) 03:53, 8 August 2024 (UTC)[reply]
"As of now" means CMOS, one would presume. The "elementary" gates are inverting: NAND and NOR. One has a marginally better energy per operation than the other, but not so much that you'd worry about it. And it depends on your logic conventions, which you can change in midstream if that helps. Back in the NMOS days, with positive logic (higher voltage representing logic 1), the NOR gate had a significantly better energy per operation than the NAND gate, due to the use of N-type switches to ground and passive pullups, with switches being in series for NAND and parallel for NOR. And both were better than PMOS gates, due to the higher mobility of electrons compared to holes. But CMOS is more nearly symmetric (at least in the most typical logic gate circuits). I think the one with series N-type and parallel P-type devices is a bit better (that's a NAND for positive logic); but I wouldn't swear to it. Choosing NAND vs NOR is not a big deal compared to using AOI (AND-OR-invert) gates and optimizations at other levels. One such other technique is the use of dynamic logic, which gets more complicated to reason about, but still probably the gates with parallel switches are a bit more energy efficient than the ones with series switches. But other considerations dominate. And I haven't seen anyone use Karnaugh maps in the last 4 decades; are they still teaching those? Dicklyon (talk) 03:53, 8 August 2024 (UTC)[reply]
Around 2003, I was taught about Karnaugh maps at a community college. In 2004, I submitted such information on an application to a summer internship. My task there was to program the Fourier transform using machine language. A spreadsheet called Excel, with its built-in trig functions, was used too. "The Fourier transform relates the time domain, in red, with a function in the domain of the frequency, in blue." An illustration of this is shown in Wikipedia under Fourier transform. There were constraints because a negative-to-positive-infinity summation couldn't be done, because of the computer's counting limitations as far as I know. I have brought up the question of a complex analysis. Thanks Afrazer123 (talk) 23:01, 8 August 2024 (UTC)[reply]
Well, it's not related to Karnaugh maps. And yes computers are not well matched to infinite problems like the Fourier transform, but are good for the discrete Fourier transform, especially implemented via the fast Fourier transform. Maybe that's what you were intended to do for your internship project. I did that once, in 1974, using assembly code on an HP 2116B. Not hard. Dicklyon (talk) 04:09, 10 August 2024 (UTC)[reply]
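The discrete Fourier transform mentioned above is a finite sum rather than an infinite one, which is why it sidesteps the "infinity" problem. A naive O(N²) Python sketch (an FFT computes exactly the same result in O(N log N)):

```python
import cmath

def dft(x):
    # X[k] = sum over t of x[t] * exp(-2*pi*i*k*t/N): a finite sum,
    # so no infinite-summation issue arises on a computer.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

dft([1, 0, 0, 0])   # a unit impulse transforms to a flat spectrum
```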
I would say that my 'logic conventions' regarding the NAND and NOR gates come from the source code or the assembly code with registers. The CPU's main memory was sufficient. It was intriguing reading about CMOS too. "CMOS is more nearly symmetric (at least in the most typical logic gate circuits)." One practical application of the DFT is the "daily temperature readings, sampled over a finite time interval (often defined by a window function)." Window functions are typically "bell shaped" curves. Finally, I delved into my summer task and thanks for your very informative reply. Afrazer123 (talk) 06:26, 10 August 2024 (UTC)[reply]
You can't tell anything about the underlying logic/voltage conventions from source code, or from logic diagrams even. You need to see the circuits that implement the gates, or have an explicit statement of the convention (e.g. high voltage = 1, low voltage = 0, which is most typical). And you definitely need to see circuits, and probably also some of their parameters, to begin to answer your original question; though the answer "about the same" is probably correct enough in general. Dicklyon (talk) 14:40, 10 August 2024 (UTC)[reply]
By the way, the Apollo Guidance Computer#Logic hardware used only one type of small-scale chip: a dual 3-input NOR. Dicklyon (talk) 04:03, 8 August 2024 (UTC)[reply]
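That NOR alone suffices (as in the AGC) is easy to verify: NOT, OR and AND all fall out of it by De Morgan's laws. A Python truth-table sketch:

```python
def nor(a, b):
    # The only primitive: true exactly when both inputs are false.
    return int(not (a or b))

def not_(a):
    return nor(a, a)             # NOR with both inputs tied together

def or_(a, b):
    return not_(nor(a, b))       # invert the NOR

def and_(a, b):
    # De Morgan: a AND b = NOT(NOT a OR NOT b), built from three NORs.
    return nor(not_(a), not_(b))

table = [(a, b, and_(a, b), or_(a, b)) for a in (0, 1) for b in (0, 1)]
```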

August 5

Downloading MediaWiki for home use

I usually use an LG Gram 1 TB SSD laptop with 32 GB RAM. I've got about 600 MB free. I would like to be able to create articles on my own PC instead of using my personal userspace. I haven't gotten any response at Talk:MediaWiki, so I'm asking here. I have always created my articles in my userspace, but have recently experienced harassment and stalking for doing so, and I'm tired of it.

Can MediaWiki be downloaded for personal use at home, with no one else accessing it? If so, what would be the requirements to get it functioning properly? Would I need to download other software, or have a huge hard drive? I guess I'm hoping for a word processor type program that works like editing here.

If it won't work in that way, is there another software program that uses the same wikimarkup we use here? I'd like to be able to create content on my PC, move it to userspace, maybe then to draftspace, and finally to mainspace. -- Valjean (talk) (PING me) 02:22, 5 August 2024 (UTC)[reply]

You can download MediaWiki software yourself, as per Manual:Installation requirements. You will require both a database and a way to serve the webpages to the user, however, as well as PHP. Doing this yourself would require a separate server or machine, although you can use a webhost and just have them install the required dependencies.
Actual storage and memory requirements are quite low, so storage wouldn't be a terrible issue, but depending on how much you would use it, that can fill up relatively quickly.
Alternatively, there are several wiki softwares that aren't all that great for public use, but are good for what you want to do (personal, internal use). Something like wikijs or dokuwiki would be better suited for this. SmittenGalaxy | talk! 06:41, 5 August 2024 (UTC)[reply]
It sounds a bit complicated for this old man. Is there anything simpler and similar to a word processor program like Microsoft Word (and I'm old enough to have used WordPerfect and directly edited its code) that uses our wiki markup? It's okay if there are red links because I wouldn't be hosting all of Wikipedia. -- Valjean (talk) (PING me) 14:54, 5 August 2024 (UTC)[reply]
There is Extension:Word2MediaWiki that can take Microsoft Word and translate it to MediaWiki markup, although it is quite old and unmaintained, and as such I don't believe it works on newer 64-bit versions of Word. Microsoft did release this addon for Word 2007 and 2010 (and 2013 with registry editing).
Aside from those, LibreOffice and OpenOffice (stated below as well) are standalone editors that can save directly as MediaWiki. See Help:WordToWiki as well. SmittenGalaxy | talk! 21:51, 5 August 2024 (UTC)[reply]
OpenOffice has a wiki extension so you can write articles in a word processor and save them in wiki markup. I personally do not feel that it produces optimal markup, but it works. 75.136.148.8 (talk) 18:41, 5 August 2024 (UTC)[reply]
  • Yes, you can do this. I've been doing it for years: on my own servers, on my laptop (often on client sites) and also internet hosted MediaWikis, so that I can access them elsewhere. They're all fairly easy.
One way to do this is with a hosting company (or the free tier on AWS) that offers a 'one click install' of MediaWiki. This is very simple.
However installing complex software on a public-accessible website always needs care and competence. Even if you're just locking it down as an extranet, you still need to lock it carefully. Andy Dingley (talk) 23:18, 5 August 2024 (UTC)[reply]
Or else you do it the classic and well-documented route of installing a Linux (or Xampp under Windows), then Apache web server, then PHP, then MediaWiki, then some MediaWiki extensions, then the Wikipedia content (mostly some templates) needed to emulate the Wikipedia experience. Because it's not so simply bundled, it's the Wikipedia templates that might take the most time to do. You can also install Lua (not all installs do this as standard) which many Wikipedia templates use instead of MediaWiki template code (which is hateful stuff anyway).
I use this every day. It's my basic desktop organisation tool. I also used to use it for writing articles for here, back when that was still worth doing. Andy Dingley (talk) 22:32, 5 August 2024 (UTC)[reply]
I think this highlights a key point not really discussed until now. If you're hoping to develop content for en.wikipedia and you want to be able to actually preview locally what you've developed, it's quite likely it's not just a basic wiki install you need but key templates as well. I mean even if you don't care about infoboxes and some stuff so can ignore these, you're probably using templates for referencing and maybe some other formatting things. A quick look at one example of what I guess is a sandbox [3] shows plenty of templates e.g. for referencing, block quotes, and other things. Note also that unless you have a local mirror of all content, or some other more complicated set-up, all interwiki links will generally be red so it might be difficult to notice if you've made a mistake. I'm sure there are ways to set it up to just obtain the templates from en.wikipedia, but I suspect this is complicated and you might need some caching setup or API access, or risk excessive downloads from the web frontend, which server admins won't be happy about. Possibly a better option might be to just write and store your content locally, with something capable of highlighting wikisyntax and/or providing shortcuts if that's what you want, and then preview online. This does mean you need an internet connection whenever you preview. (In theory you could make this fairly automated so you have an editing syntax and are able to save locally, except when you preview it uses en.wikipedia, but I'm not sure if there's an easy way to set that up.) Nil Einne (talk) 00:16, 6 August 2024 (UTC)[reply]
Ideally, volunteer editors (all of us!) should be allowed to use their "personal userspace" for article creation:
"If you would like to draft a new article, Help:Userspace draft provides a standard template and useful guidance to help you create a draft in your userspace, and the Article Wizard can walk you through all stages of creating an article with the option to save as a userspace draft too. You can use the template {{userspace draft}} to tag a userspace draft if it is not automatically done for you."(source)
Does it come with an RfD tag already prepended so nobody else has to slap it on the moment you publish the article? 75.136.148.8 (talk) 11:52, 7 August 2024 (UTC)[reply]
Those are the rules here, after all, but some don't approve of that practice when it's a topic they don't want to see here, as I have found out. If it's uncontroversial content, then they don't cause problems. Hmmm.... -- Valjean (talk) (PING me) 18:40, 6 August 2024 (UTC)[reply]
Installing a whole MediaWiki installation may be overkill if the primary use case is working on article drafts. There are a variety of plain text editors with extensions/plugins for highlighting and formatting MediaWiki markup. This also reduces the risk of losing your work if your web browser crashes or drops its cache. ClaudineChionh (she/her · talk · contribs · email) 12:06, 7 August 2024 (UTC)[reply]

August 7

Single versus Multiple Exit Points in a Function

When I was in school back in the 90s, we were taught that a function should have only one exit point. Do they still teach this? I'm asking because I'm coming across a lot of code when doing code reviews where the developer has multiple exit points and I'm wondering if I should ask them to change their code to have one exit point or let it slide. For example, I often see code like this:

        private static bool IsBeatle1(string name)
        {
            if (name == "Paul") 
            {
                return true;
            }
            if (name == "John")
            {
                return true;
            }
            if (name == "George")
            {
                return true;
            }
            if (name == "Ringo")
            {
                return true;
            }
            return false;
        }

Personally, this is how I would have written this code:

        static bool IsBeatle2(string name)
        {
            bool isBeatle = false;
            if (name == "Paul")
            {
                isBeatle = true;
            }
            else if (name == "John")
            {
                isBeatle = true;
            }
            else if (name == "George")
            {
                isBeatle = true;
            }
            else if (name == "Ringo")
            {
                isBeatle = true;
            }
            return isBeatle;
        }

So, my question is twofold:

  1. Do they still teach in school that a function should have a single exit point?
  2. When doing code reviews, should I ask the developer to rewrite their code to have one single exit point? Yes, I realize that this second question is a value judgement but I'm OK with hearing other people's opinions.

Pealarther (talk) 11:21, 7 August 2024 (UTC)[reply]
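As an aside, both variants above collapse to a single membership test, which sidesteps the multiple-exit question entirely. A Python sketch of the same function:

```python
BEATLES = {"Paul", "John", "George", "Ringo"}

def is_beatle(name: str) -> bool:
    # One expression, one exit: set membership replaces the if-chain.
    return name in BEATLES
```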

If there was only one school with only one instructor, your answer could be "yes" or "no." However, there are millions of schools with millions of instructors. So, the only correct answer is "both." Yielding functions and scripting languages have changed what is considered optimal when writing functions. So, it comes down to what the function does, what language is being used, and what the instructor feels like teaching. 75.136.148.8 (talk) 11:49, 7 August 2024 (UTC)[reply]
  • Many things taught in the '90s, and especially the '80s, are now realised to be unrealistic.
There is no reason why functions should only have one exit point. What's important is that some boundary exists somewhere where you can make absolute statements about what happens when crossing that boundary. Such boundaries are commonly functions, but it's broader than that too. In this case, we can define a contract, 'Leaving this function will always return a Boolean, set correctly for isBeatleness.' That's sufficient. To then mash that into this type of simplistic 'Only call return once, even within a trivial function' is pointless and wasteful.
You might also look at 'Pythonic' code, the idiomatic style of coding optimised for use in Python. This raises exceptions quite generously, see Python syntax and semantics#Exceptions. The boundary here is not the function itself, but the scope of the try...except block. In Pythonic code, the exception handler that catches the exception might be a very long way away. Andy Dingley (talk) 13:58, 7 August 2024 (UTC)[reply]
Yes, it was accepted wisdom (at least in academic teaching of programming) in the 1980s, and Pascal (the main teaching language in a lot of academic settings) effectively enforced it (at least in the academic versions people taught - I rather think Turbo Pascal, which was always more pragmatic, will not enforce this). But it leads to some horrible patterns:
  1. Checking that inputs and other preconditions are acceptable leads to deeply nested ifs, with the "core" of the function many levels deep.
  2. "result" accumulation - especially where "break" is also prohibited (with the same reasoning), where the function has "finished" its calculation, but has to set a result variable, which then trickles down to the eventual real return at the end of the function. This (and the break prohibition) leads to fragile "are we done yet" booleans.
So the restriction was an attempt to avoid bad code, but in doing so produced lots of different kinds of bad, unreadable, fragile code. So it's a daft restriction.
I've no idea what academics teach now, and frankly what universities (often in toy or abstract cases) do is seldom what industry does. So let's look at what industry does:
  • Code Complete reads "Minimize the number of returns in each routine. It's harder to understand a routine when, reading it at the bottom, you're unaware of the possibility that it returned somewhere above. For that reason, use returns judiciously—only when they improve readability."
  • Neither the C++ Core Guidelines nor Google's C++ styleguide seems to say anything on the topic
  • Notable codebases like Chrome, the Linux Kernel, PDF.js, IdTech3, MySQL, LLVM, and gcc all frequently use multiple return paths.
That doesn't mean "just return willy-nilly wherever", as that can be as bad - Code Complete gives smart advice. But it's a bad rule, which won't produce better code in real circumstances, and will frequently produce worse code. "Write good code" can't be boiled down to such simple proscriptions. -- Finlay McWalter··–·Talk 14:13, 7 August 2024 (UTC)[reply]
I tend to agree with the OP. However, the example of multiple exits he gives is not that bad because they are all right together. It would be worse practice to have four exits randomly spread out in a routine. Bubba73 You talkin' to me? 04:08, 8 August 2024 (UTC)[reply]
The underlying rationale for the directive to have a single exit is to make it easier to ascertain that a required relationship between the arguments and the return value holds, as well as (for functions that may have side effects) that a required postcondition holds – possibly involving the return value. If the text of the body of a function fits on a single screen, forcing a single exit will usually make the code less readable. As long as it is easy to find all exits – much easier with on-line editors than with printouts of code on punch cards as was common before the eighties – the directive no longer fulfills a clear purpose.  --Lambiam 08:14, 8 August 2024 (UTC)[reply]

How are one-time passwords secure?

To log into my Mailchimp account, I need a password plus a one-time code I either read off the Google Authenticator app on my Samsung tablet, or off the iCloud keychain. The two sources always give the same code, and to set them up, I had to enter a 16-letter code. My question is: how does any of this increase security? To get the one-time code, all a hacker needs is the 16-letter code used, and they're good to go. It just seems like a second password but more complicated. I thought the idea of one-time codes was that it would be something I know (password) and something I have (my tablet). But in fact the something I have is only useful because of the 16-letter code (something else I know). Amisom (talk) 15:48, 7 August 2024 (UTC)[reply]

If you know the secret key (the code you started with), the current time, and the algorithm, you can produce the OTP key at any point in time. 75.136.148.8 (talk) 17:21, 7 August 2024 (UTC)[reply]
Or indeed, as I said, all you need is the secret key and a widely available app like Google Authenticator. So my question is how and why that is more secure than a password alone. Amisom (talk) 17:23, 7 August 2024 (UTC)[reply]
The issue is if your communication is being intercepted, someone is looking over your shoulder, or a bug in the browser state means the text you entered (which should be forgotten immediately) is retained in memory, and a wrongdoer can recover it later. If you were sending a shared secret (e.g. a password), now the enemy has your password. If all you enter is the OTP, which expires in a minute or two, the enemy has only seconds to use it. As the OTP is generated from the 80-bit shared secret with a one-way function (in this case, a cryptographic hash function), they can't reverse the OTP to recreate the 80-bit secret. The 80-bit shared secret key should not be your regular password, nor derived from it. Typically, when setting up a TOTP entry in Authenticator, the service (e.g. Mailchimp) should generate an 80 bit random key and usually shows this on screen with a QR code (for Google Authenticator to read). After that, the 80-bit shared secret is never passed between the two parties. -- Finlay McWalter··–·Talk 18:02, 7 August 2024 (UTC)[reply]
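The one-way derivation described above is small enough to sketch. This is RFC 4226 HOTP (the counter-based variant; TOTP simply substitutes the current 30-second time step for the counter), and the expected code below is the first test vector from RFC 4226 Appendix D:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC the big-endian counter with the shared secret (RFC 4226).
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte selects a
    # 4-byte window; mask the sign bit, then keep the low decimal digits.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# TOTP would be hotp(secret, int(time.time()) // 30). Seeing a displayed
# code does not reveal the secret, and the code expires with the time step.
hotp(b"12345678901234567890", 0)   # RFC 4226 test vector: "755224"
```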
Technically, as the concern mentions, if I had thousands of computers all attempting to authenticate at the same time, I could have each one attempt thousands of possible OTP keys based on trying every possible original seed value used to set up the OTP. If one works, I can continue using it without the extra overhead of trying millions of combinations. But, even if only 16 hex values were used in the random initial seed, there would be over 1,000,000,000,000 possible values to try. As mentioned above, you can't intercept this as you can with a password. It is not transmitted anywhere. The user never types it into anything after setting up the OTP. But, the concern is not completely without merit. It is possible that someone could randomly pick out the original value used to set up the whole thing and then have their own copy of it to use. It comes down to the old analogy of you can spend billions to build a system to work out a person's OTP and hack into their bank account or you can spend $5 on a good hammer and force them to give you their phone so you can use it to log in easily. 75.136.148.8 (talk) 19:53, 7 August 2024 (UTC)[reply]
There are 16¹⁶ different hexadecimal strings of length 16, which is more than 1.8×10¹⁹. This is a whole lot more than 1,000,000,000,000.  --Lambiam 21:51, 7 August 2024 (UTC)[reply]
The comment above mentions an 80-bit shared secret. Assuming 8 bits per character, that is 10 characters, not 16. Regardless, it is correct to state that it is not likely someone will brute-force an OTP secret easily. 12.116.29.106 (talk) 12:40, 8 August 2024 (UTC)[reply]
I was reacting to the comment immediately above my reaction, which stated,
"But, even if only 16 hex values were used in the random initial seed, there would be over 1,000,000,000,000 possible values to try."
If "16" was a typo for "10", the hexadecimal strings of length 10 are more than 1.2×1024 in number, still more than 1,000,000,000,000 by over a factor of 1,000,000,000,000. These days you can buy a 7168-core GPU with a clock speed of 2.4 GHz, so trying 1,000,000,000,000 values is not an obvious impossibility. Five-character random passwords are not safe against brute force.  --Lambiam 22:16, 8 August 2024 (UTC)[reply]

August 8

[edit]

A new test for comparing human intelligence with artificial intelligence, after the Turing test has apparently been broken

[edit]

1. Will AI discover Newton's law of universal gravitation (along with Newton's laws of motion), if all that we allow AI to know is only what all physicists (including Copernicus and Kepler) had already known before Newton found his laws?

2. Will AI discover the Einstein field equations, if all that we allow AI to know is only what all physicists had already known before Einstein found his field equations?

3. Will AI discover Gödel's incompleteness theorems, if all that we allow AI to know is only what all mathematicians had already known before Gödel found his incompleteness theorems?

4. Will AI discover Cantor's theorem (along with the ZF axioms), if all that we allow AI to know is only what all mathematicians had already known before Cantor found his theorem?

5. Will AI discover the Pythagorean theorem (along with the Euclidean axioms), if all that we allow AI to know is only what all mathematicians had already known before Pythagoras found his theorem?

If the answer to these questions is negative (as I guess), then may re-discovering those theorems by a given intelligent system be suggested as a better sufficient condition for considering that system to have human intelligence, now that the Turing test has apparently been broken?

HOTmag (talk) 18:08, 8 August 2024 (UTC)[reply]

Most humans alive could not solve those tests, yet we consider them intelligent. Aren't those tests reductive? Isn't it like testing intelligence by chess playing? We consider some chess machines very good at chess, but not "intelligent". --Error (talk) 18:31, 8 August 2024 (UTC)[reply]
According to my suggestion, the ability to solve the problems I've suggested, will not be considered as a necessary condition, but only as a sufficient condition. HOTmag (talk) 18:39, 8 August 2024 (UTC)[reply]
It is impossible to test whether something will happen if it may never happen. The only possible decisive outcome is that it does happen, and then we can say in retrospect that it was going to happen. It does not make sense to expect the answer to be negative.  --Lambiam 21:36, 8 August 2024 (UTC)[reply]
I propose that we ask the AI to find a testable theory that is consistent with both quantum mechanics and general relativity (in the sense that either emerges as a limit). This has two advantages. (1) We do not need to limit the AI's knowledge to some date in the past. (2) If it succeeds, we have something new, not something that was known already. Alternatively, ask it to solve one of the six remaining Millennium Prize Problems. Or all six + quantum gravity.  --Lambiam 21:49, 8 August 2024 (UTC)[reply]
I suspect that any results from the test proposed in the first post would be impossible to verify. AI needs data to train on: lots of it. Where exactly would one find data on "what all physicists (including Copernicus and Kepler) had already known before Newton found his laws" in the necessary quantity, while ensuring that it wasn't 'contaminated' by later knowledge? AndyTheGrump (talk) 21:56, 8 August 2024 (UTC)[reply]
The same problem plagues the Pythagorean theorem, which was most likely discovered independently multiple times before Pythagoras lived (see Pythagorean theorem § History), while it is not known with any degree of certainty that it was known to Pythagoras himself. Euclid does not ascribe the theorem to anyone in his Elements (Book I, Proposition 47).[4]  --Lambiam 22:34, 8 August 2024 (UTC)[reply]
Where tests based on intellectual tasks fail, we might need to start relying on more physical tests.
Humans take less energy than AI to do the same intellectual tasks (I've mainly got inference in mind). While in the future it might not prove that I am a biological being, measuring my energy consumption on the same tasks and comparing it with that of an AI trained to do general tasks could be a way forward.
For bio-brains, training to do inference tasks is based on millions of years of evolution; the energy for training might be more than for LLMs, I don't know, but it is already spent and optimised for day-to-day efficiency. I think natural selection is an inefficient and wasteful way to train a system, but it has resulted in some very efficient inference machines.... Komonzia (talk) 04:31, 9 August 2024 (UTC)[reply]

I think all of you are missing my point. You are arguing from a practical point of view, while I'm asking from a theoretical point of view, which may become practical in a thousand years, or may never become practical.

I'll try to be more clear now:

1. If we let our sophisticated software be aware of a given finite system of axioms, and then ask our software to prove a given theorem of that axiom system, I guess our sophisticated software will probably do it (regardless of the time needed to do it).

2. Now let's assume that X was the first person in history to discover and prove the Pythagorean theorem. As we know, this happened long before Euclid phrased his well-known postulates of Euclidean geometry, but X had discovered and proved the Pythagorean theorem, whether by implicitly relying on the Euclidean axioms or in some other way. Let's also assume, theoretically speaking, that we could collect all of the works in mathematics that had been written before X discovered and proved the Pythagorean theorem. Let's also assume, theoretically speaking, that we could let our AI software be aware only of this mathematical collection we are holding (i.e. not of any other mathematical info discovered later). Since it does not include the Euclidean axioms, what will our AI software answer if we ask it whether the well-formed formula reflecting the Pythagorean theorem is necessarily true for every "right triangle", according to what the mathematicians who preceded X and who wrote those works meant by "right triangle"? Alternatively, I'm asking whether (under the conditions mentioned above about what the AI is allowed to know in advance) AI can discover the Pythagorean theorem, along with the Euclidean axioms.

3. Note that I'm asking these questions (and all of the other questions in my original post, about Newton and Einstein and Gödel and Cantor) from a theoretical point of view.

4. The task of turning this theoretical question into a practical question, is technical only. Maybe, in a hundred years (or a thousand years) we will have the historical collection I was talking about, so the theoretical question will become a practical one.

5. Anyway, if the answer to my question is negative (as I guess), then may this task of re-discovering those theorems by a given intelligent system be regarded as a better sufficient condition for considering that system to have human intelligence? Again, as of now I'm only asking this question from a theoretical viewpoint, bearing in mind that it may become practical in some years. HOTmag (talk) 08:49, 9 August 2024 (UTC)[reply]

We can give pointers to what scientists and philosophers have written about possible replacements or refinements of the Turing test, but this thread is turning into a request for opinions and debate, which is not what the Wikipedia Reference desk is for.  --Lambiam 10:58, 9 August 2024 (UTC)[reply]
My question is a yes/no question. HOTmag (talk) 11:01, 9 August 2024 (UTC)[reply]
OK. Yes. At some point in time there will be a case where a computing device or system of some kind will discover some proof of something. No. That isn't going to happen today. What you are truly asking is for an opinion about when it will happen, but you haven't narrowed down your question to that point yet. It is an opinion request because nobody knows the future. The suggestion is to narrow your question to ask for references published on the topic, not for a request about what will happen in the future. 75.136.148.8 (talk) 16:06, 9 August 2024 (UTC)[reply]
Again, it's not a question "about when it will happen". It's a yes-no question: "may this task of re-discovering those theorems by any given intelligent system be regarded as a better sufficient condition for considering that system as having human intelligence?". As I said, it's a yes-no question. Now you answer "Yes" (at the beginning of your answer). OK, so if I ignore the rest of your answer, then I thank you for the beginning of your answer. HOTmag (talk) 16:23, 9 August 2024 (UTC)[reply]
Your extreme verbosity is getting in the way of your question. Now that you've simplified it to a single question and not a diatribe about AI math proofs, the answer is more obvious. The Turing test is a test of mimicry, not a test of intelligence. So, replacing it with a different test to see if it is "better" does not really make sense. Computer programs (that could be called AI) have already taken axioms and produced proofs. They are not tests of intelligence either. They are tests of pattern matching. 75.136.148.8 (talk) 16:37, 9 August 2024 (UTC)[reply]
People with brains discovered it. A computer with a simulated brain will be able to rediscover it. Not necessarily with Generative Pre-trained Transformer algorithms, because the current generation is only trained to deceive us into thinking it can do things involving language, conversation, etc. But if a computer can sufficiently simulate a brain, there is nothing stopping it from following the same process that a human has done, possibly better or more correctly. In my opinion, there is no deeper soul than that which our brains trick us into thinking we have.
Note: even if simulating a brain is not possible (with every chemical reaction and neuron, if that ends up being needed), then there is nothing theoretically stopping it from being capable of growing a brain and using that -- or utilizing an existing brain. See wetware computer. Komonzia (talk) 16:52, 9 August 2024 (UTC)[reply]
Ask an AI to devise some much better test than the pitiful bunch above. NadVolum (talk) 18:19, 9 August 2024 (UTC)[reply]

The five human discoveries that the OP proposes as new "litmus tests" of the capability of a computer-based AI have in common that they are all special cases of more general work (e.g. Pythagoras' theorem is the law of cosines in the special case where one angle of the triangle is a right angle, so its cosine is zero) that subsequent human commentators regard as notably elegant, insightful and instructive, or that have later been shown to be historically important. Each discovery statement can be elicited by posing the appropriate Leading question (such as "Is it true that...."). AI has not yet impressed us with any independent sense of elegance, insight, scholarship or historical contextualization, nor is AI encouraged to pose intelligent leading questions. AI therefore fails the OP's tests. If this question was an attempt to reduce investigative human thought to a sterile coded algorithm, then that attempt also fails. Philvoids (talk) 22:54, 9 August 2024 (UTC)[reply]
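For instance, the reduction mentioned above is a single substitution in the law of cosines:

```latex
c^2 = a^2 + b^2 - 2ab\cos\gamma
\qquad\text{with } \gamma = 90^\circ \text{ and hence } \cos\gamma = 0:\qquad
c^2 = a^2 + b^2 .
```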

There is one case I know of where AI was tasked with improving an algorithm heavily looked at by humans already, and did so: https://arstechnica.com/science/2023/06/googles-deepmind-develops-a-system-that-writes-efficient-algorithms/
That case is especially remarkable because it wasn't trained on code samples that would have led it to this solution. However, as User:75.136.148.8 noted above, it's not necessarily a marker of intelligence: the method used to devise the optimization is the optimization equivalent of fuzz testing. Komonzia (talk) 08:55, 10 August 2024 (UTC)[reply]

August 9

[edit]

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Should Similarweb be cited to report web traffic rankings on Wikipedia?

[edit]

I added this to the Similarweb talk page, but I discovered it doesn't belong there & I believe the question is better posted here. The original question was posed on https://en.wikipedia.org/wiki/Talk:Similarweb#Should_Similarweb_be_cited_to_report_web_traffic_rankings_on_Wikipedia? & contains further discussion of the subject.

(I apologize if I've used the incorrect template. If so, please replace it with the appropriate one.)

This topic came up on Talk:GunBroker.com where I have a COI, and merits further discussion by the community at large, given the large number of pages that could be affected (to date, 166 pages). It is not my intention to engage in Wikipedia:Edit warring, but to work toward Achieving consensus.

User:Lightoil stated on 4 May 2023 that "Similarweb may be used if it is considered a reliable source."

On 24 August 2023, User:Spintendo implemented a COI edit request to cite Similarweb web traffic data.

On 26 September 2023, User:Graywalls removed the cited data and maintains that "Similarweb.com is not really a data source. [...] Similarweb is just a data aggregation."

Graywalls and I have not been able to reach consensus on this matter, so it seems opening up the topic is warranted.

Should Similarweb be cited to report web traffic rankings on Wikipedia?

Similarweb is used to report rankings all over Wikipedia, most notably the entire List of most-visited websites page, which relies solely on Similarweb as the source.

There are at least 165 other Wikipedia pages (to date) relating to website traffic for entities like Facebook, Weather Underground (weather service), WebMD, and numerous international entities. Other notable pages using these metrics include List of most popular Android apps, List of employment websites (which sorts the data based on Similarweb traffic rank), and List of online video platforms, to name a few.

The question is whether or not Similarweb rankings are a valid source, as it is common practice to use them as an exclusive source on Wikipedia pages (as evidenced by the above links and articles). Since data from sources like Alexa Internet has been discontinued, I'm at a loss to find other secondary sources for website traffic data that could be used on any pages. I would welcome other reliable secondary sources if any could be provided. LoVeloDogs (talk) 21:03, 16 October 2023 (UTC)[reply]

I think to start with it's best for someone to establish why a data aggregator cannot be used as a source on Wikipedia. Aggregation does not make data less reliable, it just means you're taking data from different places and putting it into one place. An ETL pipeline usually involves aggregation. That makes data more usable, normally, not less reliable. Komonzia (talk) 18:53, 9 August 2024 (UTC)[reply]
In my opinion starting a discussion on the Reliable Sources Noticeboard would be best to settle the issue on whether Similarweb is a reliable source. Lightoil (talk) 20:13, 9 August 2024 (UTC)[reply]
Indeed. The Reference desk is not the right venue for resolving issues concerning Wikipedia policy.  --Lambiam 20:47, 9 August 2024 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

August 10

[edit]

Printer Makes Power Supply Beep ?

[edit]

I have a Dell Inspiron 3910 running Windows 11. That isn't the issue, because I am not asking about it. I have it connected to a Schneider APC battery backup power supply. I also have a Canon ImageClass D570 all-in-one printer and copier. If I have the Canon printer plugged into one of the six battery-power sockets of the power supply, and I print a page, there is usually (not always, but usually) a beeping sound that I think is coming from the battery backup. What is causing this? Does it mean that it is draining power from the battery at a faster rate than is preferred? I also have two sockets on the power supply that are surge-protected but not on battery power. If I move the printer plug to one of these, it no longer beeps. However, if there is a transient loss of power, as happened a few times in the past two days during Debby, the printer hums when normal power resumes, because it is powering back on. The computer continued normal operation during these transient losses of power, and that is what the power supply is for.

What is causing the beeping? Is my hypothesis plausible? Is there any reason why I shouldn't reconnect the printer to surge protection only without battery backup? The beeping is annoying.

Robert McClenon (talk) 19:43, 10 August 2024 (UTC)[reply]

Yes, laser printers typically draw a huge current when they get ready to print, to warm up the fuser. Move it to the non-battery side. Dicklyon (talk) 23:21, 10 August 2024 (UTC)[reply]
Thank you for that explanation, User:Dicklyon. That answers one question and leaves another to be asked and answered. Obviously the initial draw of a laser printer heating the fuser is less than 15 amperes, and enough less than 15 amperes that a desktop computer can also be running normally while the laser printer is heating the fuser. So that would seem to mean that the power supply goes into alarm even when there is a current draw that both the battery and the line current can handle. So why would the power supply vendor build in that alarm? Robert McClenon (talk) 01:51, 12 August 2024 (UTC)[reply]
This study found startup transients exceeding 50 A for several printers, giving inverters a hard time (like the one in your UPS). The beep probably means it's unable to keep the voltage up to nominal. Dicklyon (talk) 02:16, 12 August 2024 (UTC)[reply]
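To see why such a transient strains the inverter, here is a back-of-the-envelope calculation; the 120 V mains and 1500 VA UPS rating are assumed illustrative figures, not from the discussion above:

```python
mains_volts = 120      # assumed North American line voltage
transient_amps = 50    # startup transient of the kind reported in the study
ups_rating_va = 1500   # assumed rating of a typical consumer UPS

demand_va = mains_volts * transient_amps
print(demand_va)                  # 6000 VA of instantaneous demand
print(demand_va > ups_rating_va)  # True: several times the inverter's rating
```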
Also, your manual has cautions on p. 5 that imply it will suck down a lot of current:

"When connecting power

  • Do not connect the machine to an uninterruptible power source.
  • If plugging this machine into an AC power outlet with multiple sockets, do not use the remaining sockets to

connect other devices.

  • Do not connect the power cord into the auxiliary outlet on a computer
(my bold). Dicklyon (talk) 02:24, 12 August 2024 (UTC)[reply]

August 12

[edit]