
Wikipedia:Reference desk/Archives/Computing/2015 November 18

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 18


Computer systems that handle leap seconds honestly?


As noted in our article, Unix time (the well-known seconds-since-1970 scheme) is "no[t] a true representation of UTC". This is because it makes no formal provision for leap seconds, and in fact some rather dreadful kludges are necessary at midnight on leap second days in order to handle leap seconds at all (and there have also been some rather dreadful bugs).

My question is, does anyone know of an operating system (mainstream or not) that is able to handle UTC leap seconds up-front and properly? By "up-front and properly" I mean that

  1. the kernel-level clock runs for 86,401 true seconds on a leap-second day (analogously to the way a true and proper calendar runs for 29 whole days in February during leap years), without having to do anything kludgey
  2. a user-level program that prints the time of day (perhaps after fetching a time_t value from the OS, perhaps after using a C library function like ctime or localtime) will actually print a time like "23:59:60" (as illustrated on our leap second page)

In terms of the well-known, mainstream operating systems, as far as I know, all versions of Unix, Linux, and Microsoft Windows fail both of these tests. (I'm not sure about MacOS.) —Steve Summit (talk) 01:42, 18 November 2015 (UTC)[reply]

The tz database has time zones under right/ that count leap seconds, and I think that you can get a value of 60 in tm_sec (which is permitted by POSIX) if you use one of those. But actually using them appears to violate POSIX. -- BenRG (talk) 04:14, 18 November 2015 (UTC)[reply]
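A minimal C sketch of what BenRG describes (an illustration only; it assumes the tz database's leap-second-aware "right/" zones are installed and that the C library's tzcode honours them):
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  int main(void)
  {
      /* "right/UTC" counts leap seconds in time_t.  1435708825 is
         2015-06-30 23:59:60 UTC in that convention: 1435708800 POSIX
         seconds plus the 25 leap seconds already inserted by then. */
      setenv("TZ", "right/UTC", 1);
      tzset();

      time_t t = 1435708825;
      struct tm *tm = localtime(&t);
      if (tm != NULL)
          printf("%02d:%02d:%02d\n", tm->tm_hour, tm->tm_min, tm->tm_sec);
      /* With the right/ zones present this can print "23:59:60". */
      return 0;
  }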
I believe the traditional operating systems of the IBM zEnterprise System (the successor of System/370), such as z/OS and z/VM, correctly handle leap seconds in the operating system, if set up to do so; the hardware certainly supports correct handling. According to this, z/OS spins during the leap second, so it isn't full support. I don't know about the open operating systems that have been adapted to run on zEnterprise System. I also don't know the extent to which the leap second support has filtered all the way down to application programming languages. I did find a PL/1 manual online (a language more or less exclusive to IBM); I'll report back on whether that seems to support leap seconds. Jc3s5h (talk) 15:22, 18 November 2015 (UTC)[reply]
Thanks! (Just the sort of thing I'm looking for.) —Steve Summit (talk) 15:52, 18 November 2015 (UTC)[reply]
If that's all you want (spinning during the leap second rather than following the UTC spec), any POSIX system will act the same way if the system it runs on handles leap seconds using kernel clock discipline. See [ http://www.madore.org/~david/computers/unix-leap-seconds.html ]. — Preceding unsigned comment added by Guy Macon (talkcontribs) 16:04, 18 November 2015 (UTC)[reply]
Guy, please, if you don't know what I'm looking for or don't want to help, then don't. If you think I'm attacking Posix I'm not; if you think I'm a dangerous heretic for even asking these questions then, please, take it to my talk page.
Although the IBM systems Jc3s5h cited may not pass my second test, it looks like they do pass my first, so they are the sort of (not necessarily the exact) thing I'm looking for. —Steve Summit (talk) 16:14, 18 November 2015 (UTC)[reply]
Why the personal comments? I am giving you correct technical information and attempting to correct what appears to be a misunderstanding on your part. Both IBM and POSIX count up from an epoch, ignoring leap seconds. The fact that one counts seconds since 1970 and the other counts 0.244 ns units since 1900 is immaterial. Both systems are equal as far as meeting your tests, but only if the POSIX system uses kernel clock discipline. If you have some sort of problem with POSIX that makes you insist that two systems act differently when they actually act the same, just let me know and I won't bother posting any further correct technical information in response to your questions. --Guy Macon (talk) 16:25, 18 November 2015 (UTC)[reply]
If you read the reference Jc3s5h posted, the implication (though I can't be 100% sure of this) is that the systems in question are keeping TAI internally. They may not maintain a complete history of when historical leap seconds occurred, but they do maintain a current notion of the TAI-UTC offset, so that this offset can be applied when returning UTC timestamps to user processes. So, actually, it appears to be a rather different solution, not "the same" as Posix at all (although it may end up appearing that way to a user process, after all). —Steve Summit (talk) 18:02, 18 November 2015 (UTC)[reply]
Perusal of the manual I mentioned earlier indicates that the only time and date format with a thorough description is the Lilian date. The given example makes it evident that leap seconds are ignored. The other source I mentioned, http://www-01.ibm.com/support/docview.wss?uid=tss1wp102081&aid=1 has the person setting up the system enter the leap second offset at the time of setup, and schedule leap second insertions thereafter. So there is no concept of maintaining a history of all the leap seconds that have ever happened. So it appears that in z/OS, supporting leap seconds means not crashing. Historical leap seconds are treated as if they did not exist. I'm not sure the extent to which the most recent leap second is supported; I wonder if there is any function that would report the final minute of June 30, 2015, as 61 seconds long. Jc3s5h (talk) 16:12, 18 November 2015 (UTC)[reply]
Ah, well. Thanks much for your research.
To clarify (for people like Guy who seem to misunderstand what I'm asking): I am not looking for an out-of-the-box operating system to install on my PC at home that handles leap seconds to my satisfaction. I'm looking for history: how have other systems handled leap seconds, how well did it work, and what can I learn from the attempts? If I have my own ideas for handling leap seconds, have they been thought of and tried before, and what was the experience? —Steve Summit (talk) 16:19, 18 November 2015 (UTC)[reply]
"People like Guy?" I find that comment to be rather insulting. I understood from the start that you are looking for a theoretical discussion about how other systems handled leap seconds. There was nothing unclear about your original question. This is a well-known problem in computer science.
To meet your tests for past dates you need to keep a record of all past leap seconds and provide a reliable method for updating that record when new leap seconds are announced. Once you have that it is simple arithmetic. To meet your tests for future dates is theoretically impossible. You cannot tell us how many seconds will elapse between midnight tonight and midnight on this day 50 years from today.
Many (most?) experts appear to agree that the best practical system is an incrementing counter that ignores leap seconds, with the increment frequency modified by kernel clock discipline. (See the reference I have given you twice already.) --Guy Macon (talk) 16:49, 18 November 2015 (UTC)[reply]
Okay. So (ignoring what the experts appear to agree is best or practical), let's make a list:
  1. Keep a monotonically-incrementing counter, in units of seconds, since an epoch. Define 86400 secs/day. Tinker on leap-second days to taste.
  2. Keep a monotonically-incrementing counter, in units of seconds, since an epoch. Manually insert seconds into the counter on leap second days.
  3. ______ ______ ___________ __ _ _____ ________ ____.
  4. __ ______ __ ____ ___________ __ _ __ ____ _______.
Number 1, of course, is the Posix approach. #2 is (an approximation of?) the IBM mainframe approach mentioned above. Please help me fill in 3, 4, and perhaps more. —Steve Summit (talk) 17:12, 18 November 2015 (UTC)[reply]

It depends on what you mean by "Manually insert seconds into the counter". What was described for (some) IBM systems does not change the counter in any way; it remains monotonically-incrementing and every minute is exactly 60 seconds long, including minutes with leap seconds.

What you call the IBM mainframe approach (despite it being available on POSIX or pretty much any other time/date scheme) is not inserting seconds into the counter, but rather slowing down or speeding up the rate at which the hardware clock increments until the ignores-leap-seconds counter agrees with UTC.

In other words, the definition of "second" used by the computer changes for some short period of time so that it no longer equals "the duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom."

UTC, on the other hand, never varies from the caesium 133 standard, and every second in UTC without exception is exactly 9192631770 periods. UTC handles this by actually inserting or deleting seconds. In other words, instead of redefining how long a second is, UTC redefines (for a short time) how many seconds are in a minute.

Both techniques have advantages and disadvantages, but my point is that the way IBM handles times and dates is essentially the same as the way POSIX handles times and dates. The only difference is when the counter starts and how big each increment is. I cannot emphasize this enough. Any computer that ignores leap seconds can be synchronized with UTC by slowing down or speeding up the hardware clock that feeds the time/date counter. The first two schemes listed are the same scheme.
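For concreteness, a minimal sketch of this slewing idea, assuming a BSD/Linux-style adjtime() call (widespread, though not strictly POSIX) and sufficient privileges; the kernel then absorbs the one-second offset gradually instead of stepping the clock:
  #include <stdio.h>
  #include <sys/time.h>

  int main(void)
  {
      /* After a positive leap second, a clock that ignored it is one
         second ahead of UTC; ask the kernel to slew it back slowly. */
      struct timeval delta = { .tv_sec = -1, .tv_usec = 0 };
      if (adjtime(&delta, NULL) != 0) {
          perror("adjtime");
          return 1;
      }
      puts("kernel will now slew the clock by -1 second");
      return 0;
  }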

Steve, could you please read the two references I provided and comment on them? I don't know for sure that you haven't read them, but I do know that you keep referring to what was described for IBM mainframes above without ever showing any indication that you are aware that my first reference explains the same technique with a lot more detail and explains exactly how some POSIX systems use the technique. --Guy Macon (talk) 00:52, 19 November 2015 (UTC)[reply]

Yes, I have read the two references you posted (and the one Jc3s5h posted). I thanked you for yours in this edit. Lemme see, I believe I still have all three open in various browser tabs.
My takeaways:
  • This one says that leap seconds don't matter that much to most people, and we should stop stressing about them so much. I'm intrigued to learn Linus thinks this, but I believe they're a big enough problem, and enough people are worried about them (and our current solutions are still sufficiently inadequate), that more work is needed.
  • This one does a good job of summarizing most of the issues, including the fundamental inability of a Posix time_t to accurately represent a leap second. ("By reducing UTC to a single number, we have an ambiguity in the actual instant being referred".) This page also suggests that one piece of a better solution might involve reporting leap seconds to user space unambiguously (but mostly compatibly) using deliberately nonnormalized struct timespec values (sketched in rough form just after this list). This is a simply marvelous idea which I am delighted to learn of (and I have written an email to its author thanking him for it).
  • Finally, from this one I take away that these IBM mainframes keep something close to TAI internally, and maintain the current value of UTC-TAI (i.e., the current leap second count) in a kernel register so that they can add it in when they deliver UTC to user processes. I think (though you seem to disagree with me here) that this is different enough from the typical Posix scheme to be interesting. —Steve Summit (talk) 04:12, 19 November 2015 (UTC)[reply]
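A rough sketch of the non-normalized struct timespec idea mentioned in the second bullet above; the specific encoding shown (tv_nsec allowed to reach or exceed 1,000,000,000 during a leap second) is an illustration of the concept rather than the cited page's exact proposal:
  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      /* 23:59:60.25 UTC on 2015-06-30 expressed as the previous second
         plus more than a full second of nanoseconds.  Callers that
         normalize see 00:00:00.25; leap-second-aware callers can detect
         the extra second from the out-of-range tv_nsec. */
      struct timespec leap = {
          .tv_sec  = 1435708799,        /* 2015-06-30 23:59:59 UTC */
          .tv_nsec = 1250000000L        /* 1.25 s past it, i.e. 23:59:60.25 */
      };

      if (leap.tv_nsec >= 1000000000L)
          printf("within a leap second: %.2f s past time_t %ld\n",
                 leap.tv_nsec / 1e9, (long)leap.tv_sec);
      return 0;
  }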

A bit more information from someone who posted to my talk page:

  • The "Future of UTC" colloquium has, naturally enough, dealt with many/most/all of the issues that have come up in this thread (and then some).
  • This page describes a set of techniques for synching your NTP server and the rest of your computers to GPS time, and then using a modification of the "right" tzinfo files (mentioned earlier in this thread by BenRG) to convert to UTC.

Steve Summit (talk) 05:21, 19 November 2015 (UTC)[reply]

Those are really good. Thanks! --Guy Macon (talk) 06:33, 19 November 2015 (UTC)[reply]

it's complicated...

It is a lot more complicated than simply defining one possible behavior as "handling leap seconds honestly", when the defined behavior implies that, when converting a seconds_since_epoch value to a time in hours/minutes/seconds, the computer has to give one of two possible hours/minutes/seconds answers for a particular seconds_since_epoch value, and then one second later convert the exact same seconds_since_epoch value into a different hours/minutes/seconds value -- and the conversion has to (by psychic powers?) pick the right answer even when converting a stored seconds_since_epoch value. See [ http://www.madore.org/~david/computers/unix-leap-seconds.html ] Also, this definition of "handling leap seconds honestly" is completely unable to handle them "honestly" for dates and times a few years in the future, because we have no idea when leap seconds will be added or subtracted. See [ http://www.wired.com/2015/01/torvalds_leapsecond/ ]. --Guy Macon (talk) 12:38, 18 November 2015 (UTC)[reply]
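The ambiguity is easy to see from the POSIX formula for "seconds since the Epoch": plugging in 2015-06-30 23:59:60 UTC and 2015-07-01 00:00:00 UTC yields the same number. A small self-contained check (the helper name is just for illustration):
  #include <stdio.h>

  /* The POSIX formula: tm_year is years since 1900, tm_yday is days
     since January 1. */
  static long long posix_epoch_seconds(int year, int yday, int hour, int min, int sec)
  {
      long long y = year;
      return sec + min * 60LL + hour * 3600LL + yday * 86400LL
           + (y - 70) * 31536000LL + ((y - 69) / 4) * 86400LL
           - ((y - 1) / 100) * 86400LL + ((y + 299) / 400) * 86400LL;
  }

  int main(void)
  {
      /* 2015-06-30 has tm_yday 180, 2015-07-01 has tm_yday 181. */
      long long leap = posix_epoch_seconds(115, 180, 23, 59, 60);
      long long next = posix_epoch_seconds(115, 181,  0,  0,  0);
      printf("23:59:60 -> %lld, next day's 00:00:00 -> %lld\n", leap, next);
      /* Both print 1435708800. */
      return 0;
  }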
By "honestly" I simply meant, "in accordance with the definition of UTC", which is that some days have 86,401 seconds in them, an occurrence which the Posix definition of time_t simply cannot handle.
Yes, it is complicated. But it would be a lot less complicated if people would stop assuming that it is written in stone somewhere that the best and the only way of representing time is as seconds since the epoch. That is a notion which was invented by and is convenient for computer programmers, but it is a notion which has now so constricted our thinking that we are on the verge of abandoning leap seconds because it looks like they're practically impossible to handle correctly.
(Thanks very much for the two references you cited, though. Very useful. But I mostly know what they're going to say, because I've read plenty of others, and I do understand the objections to trying to handle leap seconds by other than the current ghastly kludges.)
Here's a thought experiment to help move the argument past the but-if-we-honor-leap-seconds-we-can't-even-compute-the-difference-between-two-timestamps-without-consulting-an-external-table concerns. This year my birthday fell on October 2 = 1443744000 UTC. Suppose I want to know when my birthday will be next year. I compute 1443744000 + 365 × 24 × 60 × 60 = 1475280000. Converting that from UTC back to a calendar date I get... October 1! Wait a minute, what went wrong? —Steve Summit (talk) 13:13, 18 November 2015 (UTC) [edited 15:29, 18 November 2015 (UTC)][reply]
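The arithmetic of the thought experiment, written out (the values are the ones given above; because 2016 is a leap year, 366 calendar days separate the two October 2nds, so adding 365 x 86,400 seconds lands on October 1):
  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      time_t bday2015 = 1443744000;                 /* 2015-10-02 00:00:00 UTC */
      time_t naive    = bday2015 + 365 * 24 * 60 * 60;

      char buf[64];
      strftime(buf, sizeof buf, "%Y-%m-%d", gmtime(&naive));
      printf("%ld -> %s\n", (long)naive, buf);      /* 1475280000 -> 2016-10-01 */
      return 0;
  }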
But really, I don't want to get in a long argument here about whether leap seconds are a good idea or not, or how hard they are to handle. The question I asked in the section above and am looking for answers to is simply: have any systems handled them other than by the Posix definition? (Feel free to take this to my talk page if you think I'm being too heretical by even asking the question.) —Steve Summit (talk) 13:19, 18 November 2015 (UTC)[reply]
Non-POSIX time and date? There are many such systems.
The Time of Day Clock on the S370/zSeries IBM mainframe is a 64-bit count of 2^−12 microsecond (0.244 ns) units starting at 1 January 1900.
In MS-DOS, the date and time stamps stored in FAT are in this format (decoded in the sketch below):
  • 7 bits: binary year number, counted from 1980 (0-119) (years 1980 to 2099)
  • 4 bits: binary month number (1-12)
  • 5 bits: binary day number (1-31)
  • 5 bits: binary number of hours (0-23)
  • 6 bits: binary number of minutes (0-59)
  • 5 bits: binary number of two-second periods (0-29) (0 to 58 seconds)
--Guy Macon (talk) 15:28, 18 November 2015 (UTC)[reply]
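For illustration, a small sketch that decodes the FAT layout listed above; the variable names and the example words are invented for the example, while the bit positions follow the list (year in the top 7 bits of the date word, two-second units in the low 5 bits of the time word):
  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      uint16_t fat_date = 0x4772;   /* encodes 2015-11-18 */
      uint16_t fat_time = 0x7B8F;   /* encodes 15:28:30 */

      int year   = 1980 + ((fat_date >> 9) & 0x7F);
      int month  = (fat_date >> 5) & 0x0F;
      int day    =  fat_date       & 0x1F;
      int hour   = (fat_time >> 11) & 0x1F;
      int minute = (fat_time >> 5)  & 0x3F;
      int second = (fat_time        & 0x1F) * 2;   /* two-second resolution */

      printf("%04d-%02d-%02d %02d:%02d:%02d\n",
             year, month, day, hour, minute, second);
      return 0;
  }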
The problem with Steve's thought experiment is that birthdays are usually commemorated on the same calendar date each year. Since 2016 is a leap year, one must add 366 days, not 365. One problem with all the existing time/date scales is that we're missing one: observe true UTC on the date the leap second occurs, but treat all days in the past or future as containing 86,400 seconds. For any historical record that was recorded in true UTC, or any future event scheduled in 86,400-second days but which must start according to a clock that displays UTC, the people involved are officially on their own and should do what they think is right. We have no name for this concept.
This concept is really not much different than how the law treats time in most situations. The people involved in an event record the time using whatever equipment and procedures they think are appropriate, and if they later disagree and can't settle it among themselves, they go to court and get a decision that applies to that one particular situation. Jc3s5h (talk) 17:14, 18 November 2015 (UTC)[reply]
To expand on the above, for some people you need to add 1461 days to get to the next birthday. This is a major plot point in The Pirates of Penzance; Frederic was indentured until his 21st birthday, but was born on 29 February and thus will be released when he is in his 80s. Leap years, like leap seconds, cause all sorts of complications. I say we should simply build huge rocket engines on the equator and adjust the rate at which the earth rotates. --Guy Macon (talk) 01:02, 19 November 2015 (UTC)[reply]
You need to credit Larry Niven, for One Face --Trovatore (talk) 01:15, 19 November 2015 (UTC) But nice finessing of the issue of whether Gilbert realized 1900 was not a leap year! --Trovatore (talk) 01:25, 19 November 2015 (UTC) [reply]

Guy Macon's take on z/Enterprise System seems to account for only one of two possible ways it can be set up. If I understand Leap Seconds and Server Time Protocol (STP) correctly, installations that don't need accurate time around the time of a leap second ("accurate" meaning an error of well under a second) can just set the number of accumulated leap seconds to 0 and not schedule any leap second insertions. When the external time source starts reporting a time one second different from the time-of-day (TOD) clock, the TOD will be steered to agree with the external source. An installation that needs all recorded times to be accurate can set the accumulated leap seconds and schedule leap seconds. When this is done, the description of the machine instructions and TOD clock contained in the z System Principles of Operation will be correct; the time-of-day clock will contain the time, in 2^−12 microsecond steps, since the IBM standard epoch, 0 AM January 1, 1900 UTC [sic; it's really extrapolated back from 1 January 1972 UTC with no leap seconds between 1900 and 1972]. When used in the more accurate setup, the accumulated leap seconds are applied to the TOD clock before comparing to the external time source, so the TOD clock will only be steered for real clock errors, such as the crystal in the TOD clock having a slightly different frequency than the atomic clocks that control the external time source.

If the system is set up in the more accurate mode, a machine instruction is available to report the value of the TOD clock; when converted to a number of seconds and added to 0 AM 1 January 1900, this will give the reading on a time scale that agrees with UTC on 1 January 1972, and counts all leap seconds. There is another form of the machine instruction that will report the UTC time of day or the local time of day.

So I infer the way the accurate setup works is that it can record the actual UTC time of day of any event to a sub-second accuracy, even immediately before or after a leap second. But since z/OS spins during the leap second, the system thinks that no events occurred during the leap second. For example, there would have been no need to record an event as 30 June 2015 23:59:60.5 UTC because as far as the system is concerned, no event occurred at that time. Jc3s5h (talk) 15:41, 19 November 2015 (UTC)[reply]

Thanks! I had missed the second method. Good information.
So far all leap seconds have been positive (added seconds). In the case of a negative leap second (perhaps someone overdoes the rockets on the equator...) the "less accurate" technique seems like it would work just fine (steering to the correct time provided by the external time source at a rate of approximately 1 second per 7 hours works either way without disruption), but I am not sure I fully understand the "more accurate" technique in the case of a negative leap second. Am I correct in assuming that it simply has a single minute that is 59 seconds long? --Guy Macon (talk) 17:40, 19 November 2015 (UTC)[reply]
The sources I've found don't go into enough detail to say what would happen in the case of a negative leap second. My guess would be that the TOD clock would just keep counting as it always does, perhaps even if the system is otherwise powered down. I would need to think about it for a bit to figure out if there would be any need to spin the OS to prevent two different TOD readings that have the same broken-down UTC date/time, or two different UTC date/times that have the same TOD reading. Jc3s5h (talk) 18:39, 19 November 2015 (UTC)[reply]
As far as a running process is concerned, a negative leap second is not much different than being swapped out for a second. As far as a kernel is concerned, it's not much different from a virtual machine saving its state to a file, shipping that file across the net to some other VM host, and resuming that VM a second or more later. Steven L Allen (talk) 18:58, 19 November 2015 (UTC)[reply]
I've done a few calculations. Imagine there is a negative leap second at the end of 2016. A z/Enterprise System with the more accurate setup would report at 23:59:58.99999 Dec. 31, 2016 that the TOD clock reads 3,692,217,624,999,990 microseconds. Ten microseconds later the TOD clock would read 3,692,217,625,000,000, which would be reported as 00:00:00 Jan. 1, 2017 UTC. So as far as a running process was concerned, as long as it was only looking at the unconverted value of the TOD clock, nothing much happened except the passage of 10 microseconds. Only if the process issued a machine instruction demanding the time of day in UTC would anything out of the ordinary happen, namely that the second named 23:59:59 never happened. Jc3s5h (talk) 21:38, 19 November 2015 (UTC)[reply]
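Checking that arithmetic under the scenario's assumptions (26 leap seconds accumulated before the hypothetical negative leap second, 25 after it, and a TOD epoch of 1900-01-01 counting no leap seconds before 1972):
  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      /* 1900-01-01 to 2017-01-01 is 117 years, 29 of them leap
         (1904..2016; 1900 was not a leap year). */
      int64_t days    = 117LL * 365 + 29;          /* 42,734 days     */
      int64_t sec1900 = days * 86400;              /* 3,692,217,600 s */

      /* 23:59:58.99999 Dec. 31, 2016 with 26 leap seconds applied. */
      int64_t before = (sec1900 - 1) * 1000000 + 26 * 1000000 - 10;
      /* 00:00:00 Jan. 1, 2017 with 25 leap seconds applied. */
      int64_t after  =  sec1900      * 1000000 + 25 * 1000000;

      printf("before: %lld microseconds\n", (long long)before);
      printf("after:  %lld microseconds\n", (long long)after);
      /* Prints 3692217624999990 and 3692217625000000. */
      return 0;
  }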

Windows 7


Can I copy my disk version of windows 7 to another drive in case my first drive fails? --31.55.64.160 (talk) 02:52, 18 November 2015 (UTC)[reply]

Sure, if you use Disk cloning software like Clonezilla. FrameDrag (talk) 12:55, 18 November 2015 (UTC)[reply]
What about using the inbuilt backup feature in W7? That says it creates an ISO image of the OS. But can you use this image to reinstall Windows onto a clean disk? 31.55.64.110 (talk) 21:00, 18 November 2015 (UTC)[reply]
It works perfectly and does exactly what you want. The problem is that you need a third hard disk (you can't save to or restore from the same disk) and it has to have enough free space. If you have that, Windows backup works great. I have done this several times. It may take a couple of days to do the save and a couple more for the restore if you are talking about a full 4TB drive, but in the case of smaller/less full drives it will be a lot faster. --Guy Macon (talk) 01:10, 19 November 2015 (UTC)[reply]
Can you explain further about having to use 3 hard disks? I can't grasp what you are saying.--178.104.65.199 (talk) 16:37, 19 November 2015 (UTC)[reply]
Let's say you have two hard disks, which I will call "OLD" and "NEW". You have Windows 7 on OLD and you want to put Windows 7 on NEW. You could use the disk cloning software that FrameDrag mentioned above, but you want to do it using windows backup. So your first step is to back up OLD. Where do you plan on putting the backup file? You can't back up OLD on OLD - Windows backup doesn't allow that. So you put it on NEW -- your only remaining choice. Then you find that you cannot restore a backup file on NEW to NEW - Windows backup doesn't allow that. So you can't put the backup file on OLD and you can't put the backup file on NEW. Where do you put it? --Guy Macon (talk) 18:05, 19 November 2015 (UTC)[reply]

How do I create a website with an unusual URL extension?


I see a lot of weird URL extensions on this website: https://iwantmyname.com/domains/new-gtld-domain-extensions I'm just not 100% sure what's the process behind these unusual URLs as opposed to just using dot com. Are they free for everyone to use, or did this company somehow gain exclusive rights to them? Can I buy a website through, say, WordPress or GoDaddy and just use a weird suffix? 2605:6000:EDC9:7B00:8C5C:47A8:2805:69A7 (talk) 03:59, 18 November 2015 (UTC)[reply]

The "URL extensions" are top-level domains. Ultimately ICANN decides what TLDs can exist. They've chosen to do this by soliciting proposals from private parties, with a $185,000 application fee, and with the submitting party getting control of the domain if it's approved. The benefit for ICANN appears to be that they make tons of money (more than 1000 applications according to this, so over $185 million in fees). The benefit for winners is that they get to collect fees from registrants; it's probably especially lucrative for broad domains like .biz or .app where many companies will feel they have to register their trademarks just to protect them. The benefit to everyone else is unclear. But yes, you can buy subdomains of the new TLDs, from the controlling entity or a reseller, for a price ranging from cheap to thousands of dollars. -- BenRG (talk) 05:51, 18 November 2015 (UTC)[reply]
To answer the "Can I buy a website through, say, WordPress or GoDaddy and just use a weird suffix?" question, if you set up your own server you can use any URL you want, but good luck convincing the rest of the internet to connect to it. WordPress and GoDaddy will only sell you domains that the rest of the internet has already agreed to connect to, like .com or .biz.
iwantmyname.com is almost certainly a scam. They are advertising domains like .army that they have no rights to and never will. Stick with a normal internet provider (I personally like pair.com). --Guy Macon (talk) 15:39, 18 November 2015 (UTC)[reply]
I agree. "We recommend pre-ording" is a bit of a giveaway.--Shantavira|feed me 08:36, 19 November 2015 (UTC)[reply]
I can't comment on whether the site is a scam, but "We recommend pre-ording" (if we ignore the typo) is something many legitimate domain name sellers are pushing, as BenRG has said, partly because many are pushing people to get their name before someone else does. It may be a bit dodgy, but it isn't a clear indication the site is a scam.

I also find Guy Macon's comment very confusing. .army (which shouldn't be confused with .mil) appears to be a relatively open TLD [1] (that link itself may be of interest), so it's likely many resellers are able to provide .army domain names. GoDaddy does provide .army domain names [2], as they do for 1207 other TLDs [3] (well, I think some of these don't have registration yet). WordPress, since they aren't really that involved in the domain process, are far more limited [4]; that includes missing out on most country code TLDs, even those with relatively open policies.

Whether or not people will think it's a good idea, the myriad of gTLDs which now exist are what the rest of the world has agreed to connect to as part of the ICANN process. Here are 4 .army domain names I found from a quick internet (Google in this case) search; most people reading this thread should be able to connect to them: http://www.fail.army/ , https://thejack.army/ , http://www.forexpeace.army/ , http://davids.army/ . Here's a .navy http://www.volleyball.navy , here's a .airforce http://chiptune.airforce/ , here's a .mba http://www.faisal.mba/ , here's a really weird choice for a .cricket http://womensiceskates.cricket/ (maybe it's spam or something). Some of those are redirects, but you can easily see my link is to a .whatever domain (incidentally, it sounds like .whatever was a gTLD proposal, but I didn't find precisely what happened to it). Or feel free to type it out, or simply search for your own favourite example (.example is however one TLD which should never exist, as per our article).

Nil Einne (talk) 18:01, 19 November 2015 (UTC)[reply]

I was clearly mistaken about .army. It is supposed to be limited to defense contractors but http://davids.army/ is not a defense contractor. Thanks for the correction. --Guy Macon (talk) 18:14, 19 November 2015 (UTC)[reply]
Not sure why you think it's supposed to be limited to defence contractors. Nowhere in Demand Media's application do I see anything about limiting it to defence contractors, and I'm not sure how well such a thing translates anyway; does it mean the same thing in Iceland as it does in Russia as it does in China as it does in the US? There was concern over the possibility of confusion with official websites from the US, Australian and Indian governments, but Demand appears to have allayed that concern sufficiently for ICANN with their anti-abuse policies. Defence contractors and people associated with the .army may have been the target market, but that's a distinct thing. My impression is most of the new gTLDs intentionally had no specific restrictions like that. Our List of Internet top-level domains#ICANN-era generic top-level domains seems to support that view, although most of it is uncited.

BTW, I wanted to add to the above but got an edit conflict: WordPress will likely work with these domains if you register them somewhere else, albeit with possible teething or other such problems [5]. Here are 2 WordPress sites under .army domains https://robots.army/ & http://seo.army/partner-view/wordpress/

I did however notice in the .army agreement [6] it does say

While ICANN has encouraged and will continue to encourage universal acceptance of all top-level domain strings across the Internet, certain top-level domain strings may encounter difficulty in acceptance by ISPs and webhosters and/or validation by web applications. Registry Operator shall be responsible for ensuring to its satisfaction the technical feasibility of the TLD string prior to entering into this Agreement.

But I'm not aware any ISPs have intentionally blocked any of the new gTLDs except for the ones like .porn which may primarily provide content illegal in local jurisdictions. Any problems with more innocuous domains like .army are probably simply legacy issues that no one bothered to fix (similar to the WordPress issues that may have cropped up). I don't see any mention of even India blocking it despite their strong concern when it was proposed. And 2 random name servers that I think are Indian seem to look up robots.army fine.

Nil Einne (talk) 19:07, 19 November 2015 (UTC)[reply]

C \ Assembler oriented questions


Hello, I'm not a C\Assembler programmer, and I ask the question for general knowledge only. Thanks in advance:

1. A programmer writes an operating system in C. Would the programmer have any reason to write some of the OS kernel (C) code in one of the assembler languages too? I just wonder if such a combination is a common practice (as is the combination of Node.js and PHP) or if it is even practical...

2. Does the shell (CLI\GUI) sit right above the OS kernel?

Ben-Yeudith (talk) 16:39, 18 November 2015 (UTC)[reply]

Yes, typically some very low-level and performance-critical parts of the OS are written in assembler. Most C compilers support inline assembler techniques to make it less painful. As for the CLI/GUI sitting right above the OS kernel: Typically there are at least various abstraction layers (represented as libraries). But at least under UNIXy systems, the shell is a normal OS process that is directly managed by the kernel. Of course, you need some kind of terminal to interact with a shell if it's used as a UI (as opposed to a script interpreter). --Stephan Schulz (talk) 17:19, 18 November 2015 (UTC)[reply]
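As a concrete example of the inline assembler mentioned above, a small GCC/Clang-style sketch (x86-64 assumed; other architectures need different instructions):
  #include <stdint.h>
  #include <stdio.h>

  /* Read the CPU's timestamp counter with the rdtsc instruction,
     wrapped in a normal C function. */
  static uint64_t read_tsc(void)
  {
      uint32_t lo, hi;
      __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
      return ((uint64_t)hi << 32) | lo;
  }

  int main(void)
  {
      printf("TSC: %llu\n", (unsigned long long)read_tsc());
      return 0;
  }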
1. - There are usually a few places where it's still necessary to write some assembly, including:
  • at the very start of system execution, when the system is mostly uninitialised and even RAM may be unavailable
  • at the entry and exit points of interrupt service procedures and hardware signal handlers, where the normal prerequisites for the C execution environment may not be available
  • in kernel code, to allow use of specific architectural features (chiefly instructions) which do not map well to a general programming language. E.g. you may find the synchronisation code in the kernel uses the architecture's test-and-set instruction (or one of the other types of atomic instructions listed in that article's see-also section) as a building block for higher level synchronisation operations like semaphores.
  • in general (which can mean drivers or application code) you might find a modest amount of assembly code to support hardware-accelerated features like SIMD instructions or cryptographic operations.
OS kernels are almost always implemented (now) in C or C++ with a smattering of assembly as I've described above. In the past kernels have been written in languages like Pascal or Forth. There have been some research projects which aim to write all (or almost all) of the kernel in a more dynamic language like Java (JNode) and C# (Microsoft's Midori), although the time-and-memory sensitive parts usually need language extensions and require the programmer to use only a limited subset of the language in those places. As languages like Java and C# can compile down to pretty decent assembly, this isn't a totally bonkers proposition. Using a truly dynamic language like Python or Javascript in such settings is much less practical, as in these cases much less can be known for sure about the types (and thus memory layouts) of objects right up until execution time. I daresay that someone could choose again to write in a restricted subset of these languages, eschewing the very flexibility and dynamism that make them effective, and try to write an OS in e.g. asm.js. But I think that if anything has a chance to displace C (and C++, to the extent that it is used, again as a subset) in kernel development, it's more likely to be a language like Go or Rust. -- Finlay McWalterTalk 17:37, 18 November 2015 (UTC)[reply]
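A sketch of the test-and-set building block described above, written with the GCC/Clang __atomic built-ins rather than hand-written assembly (the compiler emits the architecture's atomic instruction underneath, e.g. xchg on x86 or exclusive load/store on ARM):
  #include <stdbool.h>
  #include <stdio.h>

  static bool lock_word;   /* false = unlocked */

  static void spin_lock(bool *lock)
  {
      while (__atomic_test_and_set(lock, __ATOMIC_ACQUIRE))
          ;   /* spin until the previous value was clear */
  }

  static void spin_unlock(bool *lock)
  {
      __atomic_clear(lock, __ATOMIC_RELEASE);
  }

  int main(void)
  {
      spin_lock(&lock_word);
      puts("in the critical section");
      spin_unlock(&lock_word);
      return 0;
  }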
2. The shell is just a program. -- Finlay McWalterTalk 17:37, 18 November 2015 (UTC)[reply]
Another way to look at this problem: you have to use machine-code any time your compiler does a poor job abstracting the machine's implementation-details. In principle, your compiler could special-case a construct written in any higher-level language: in fact, this is the case whenever a compiler "built-in" feature is used, such as the vector C extensions built into gcc, or the clang atomic built-ins. These compilers allow you to write operating system primitives, like efficient locking, in pure C without resorting to machine language. You don't need to know the op-code for atomic-increment or vector-multiply on Intel or ARM or AVR... you can just call a special C function that's recognized by the compiler. However, your "pure C" will only compile if your compiler supports these intrinsic features for some specific machine-architecture - so it's somewhat circuitous to call this "platform-portable". Essentially, you're still writing code with some degree of platform-specificity, but using the syntax of a higher-level language.
Nimur (talk) 17:55, 18 November 2015 (UTC)[reply]
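And a sketch of the GCC vector extensions mentioned above: element-wise arithmetic written in plain C that the compiler lowers to SIMD instructions where the target supports them (the type name v4sf is arbitrary):
  #include <stdio.h>

  typedef float v4sf __attribute__((vector_size(16)));   /* 4 x float */

  int main(void)
  {
      v4sf a = {1.0f, 2.0f, 3.0f, 4.0f};
      v4sf b = {10.0f, 20.0f, 30.0f, 40.0f};
      v4sf c = a * b;                                     /* element-wise multiply */

      printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
      return 0;
  }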

If you want to see a minimal working configuration of C and assembler, look at the "Bare Bones" tutorial of the OS Dev Wiki: http://wiki.osdev.org/Bare_Bones OldTimeNESter (talk) 19:49, 18 November 2015 (UTC)[reply]

external hard drive - sleep or not


I got a new external hard drive two weeks ago and it defaults to going to sleep after 30 minutes of inactivity (which I can change). It takes a long time for it to wake up. I know it saves electricity for it to sleep, but does it make it last longer? (I've had way too many external drives fail.) Bubba73 You talkin' to me? 21:56, 18 November 2015 (UTC)[reply]

A lot depends on the design of the hard drive. Both keeping it running and going to sleep cause wear, but different kinds of wear. What I recommend is setting the duration so that in general it goes to sleep when you go to sleep or when you spend all day at work, but stays spinning otherwise. IMO that's the best compromise for maximum life, and it is also the least annoying for the user. --Guy Macon (talk) 01:17, 19 November 2015 (UTC)[reply]
The sleep time can be 10, 15, 30, 45, or 90 minutes, or turned off. The default is 30 minutes (of inactivity, I assume). I generally use it a few times per day. Bubba73 You talkin' to me? 02:23, 19 November 2015 (UTC)[reply]
Well, that limits you. I would just turn off the sleep because computer pauses are so annoying. Experts and users are divided on this issue:
--Guy Macon (talk) 06:51, 19 November 2015 (UTC)[reply]
thanks, I'm now having it not go to sleep. It is too annoying to have to spin up when needed. Also, sometimes programs trying to read a file from it think that something is wrong with the file. Anyhow, it is a 5TB that I got for about $120 after a discount, so if it dies in 4 years instead of 5, it is no big deal, since it is used for backup. Bubba73 You talkin' to me? 00:11, 20 November 2015 (UTC)[reply]
Resolved
Yeah, what little data we have on hard disk drive failure shows us that SMART values like run time, temperature, and spinup-count are poorly correlated with future failures - and so, as Guy Macon's sources note, there isn't strong evidence guiding you what you should do. The evidence does suggest that failures follow a bathtub curve, which means the drive is likely to either fail early (which means it was going to, regardless of what you did) or last a long time (and in the 4 or 5 year timeframe, as you say, the drive will be effectively worthless either way). -- Finlay McWalterTalk 00:37, 20 November 2015 (UTC)[reply]