Wikipedia:Reference desk/Archives/Computing/2016 September 26

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


September 26

Too much memory!

I recently installed some old software on my laptop (Heroes of Might and Magic III, for the record). The installer came up with a warning saying that the game might run slowly, because it needed 32Mb of RAM, and I only had 4Gb. (I actually have 6Gb). Why would the installer mis-count the memory, and why would it think 4Gb (or 6Gb) is less than 32Mb? Iapetus (talk) 11:54, 26 September 2016 (UTC)[reply]

I would theorize that it's from a time when that much memory wasn't possible, so it assumes an error occurred in reading the amount of memory, and gives you the "not enough memory" error just in case, but still reports the amount it read. As for reading 4GB instead of 6GB, you likely have a 4GB memory module and a 2GB module, and it only read one. (Incidentally, you should use "GB" for gigabytes, as "Gb" is sometimes used for gigabits.) StuRat (talk) 14:44, 26 September 2016 (UTC)[reply]
As StuRat has suggested, this warning is probably due to a software error. Among many plausible root causes, the programmer who wrote the game software might have handled unexpected quantities of memory insufficiently thoroughly: 4 gigabytes is a magical number, because it causes an integer overflow if the programmer used a 32-bit integer to store the memory size - a 32-bit unsigned counter tops out at 2^32 − 1 bytes, just under 4 GiB, so 6 GiB simply cannot be represented and gets truncated or wrapped. There was an ancient time, long ago, when "all reasonable programmers" knew that 32 bits was big enough to count everything, especially bytes of available memory ...
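For illustration, here is a minimal C sketch (hypothetical - not the game's actual code) of how a 32-bit byte counter ends up misreporting 6 GiB of RAM:

    /* A hypothetical sketch of a 32-bit byte counter misreporting 6 GiB. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t actual_bytes = 6ULL * 1024 * 1024 * 1024;   /* 6 GiB really present   */
        uint32_t reported = (uint32_t)actual_bytes;          /* truncated to 32 bits   */

        printf("actual:   %llu bytes\n", (unsigned long long)actual_bytes);
        printf("reported: %u bytes (%.1f GiB)\n", reported,
               reported / (1024.0 * 1024.0 * 1024.0));
        /* 6 GiB mod 2^32 = 2 GiB; a signed 32-bit comparison against a
         * 32 MB minimum can then misfire in even stranger ways.         */
        return 0;
    }

Whatever the installer actually does internally, a truncated or wrapped counter of this sort is consistent with the nonsense numbers in the warning.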
Unsafe version- and resource-checking is a sort of software antipattern - a design defect that occurs so commonly that professional software engineers recognize its symptoms from a mile away. The original software author thought they were being a defensive programmer by adding a safety check - they knew (or thought they knew) the program's limitations and tried to protect their program if the memory was insufficient. What actually happened, though, is that they implemented this defense incorrectly and introduced a bug. In your case, it sounds like the bug is benign - the program simply warns and proceeds... but there are many, many cases in other software in which the exact same design antipattern causes a program to misbehave or terminate.
The moral of this story is that programmers should think very, very, very carefully about their defensive programming tactics. The intent is good, but the execution is sometimes flawed.
Nimur (talk) 16:16, 26 September 2016 (UTC).[reply]
So here's a lengthier response aimed specifically at readers who are also programmers: how can you tell how much memory is actually available? Well, you could call some cryptic platform-specific system call; you could hard-code the hardware model identifiers into a look-up table and estimate; you can come up with all sorts of nearly-correct ways to count how much memory you could hypothetically allocate. All of these methods are defective. Instead of "counting," why not just allocate all the memory you need and actually handle any failures if and when the memory is unavailable? If the program truthfully cannot run unless X bytes are allocated, then try to malloc X bytes ahead of when you require them, and check whether that worked! And if your counter-argument is that you wish to avoid paging, the exact same corollary applies: why do you wish to avoid paging? For performance? How fast is paging on (unknown future computer architecture) where your software now executes? How thoroughly do you really understand modern memory management on modern computer hardware? If you have a performance requirement: execute your code, benchmark it, and handle errors if the benchmark is insufficient. Adding heuristics to your software in the hope of heading off performance bugs will inevitably lead to unnecessary preemptive failures the moment your heuristics break. You probably don't know enough about the user's computer hardware to design relevant performance heuristics at the time you author your code.
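As a concrete illustration of that suggestion (a minimal sketch, not code from any real game; the 32 MB figure is simply taken from the installer's warning):

    /* Allocate what you need up front and handle failure, rather than
     * guessing at the machine's total memory. */
    #include <stdio.h>
    #include <stdlib.h>

    #define REQUIRED_BYTES (32u * 1024 * 1024)   /* 32 MB, per the stated minimum */

    int main(void)
    {
        void *pool = malloc(REQUIRED_BYTES);
        if (pool == NULL) {
            fprintf(stderr, "Could not allocate %u bytes; exiting cleanly.\n",
                    REQUIRED_BYTES);
            return 1;
        }
        /* ... run the program out of the pre-allocated pool ... */
        free(pool);
        return 0;
    }

Stephan Schulz's caveat about overcommitment, further down this thread, still applies: on some systems this malloc can "succeed" even when the memory isn't really there.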
Real world example: "my code only runs on embedded systems,..." so it'd be safe to hack up an assumption that my registers are 32-bits wide... until wham! - that exact same code starts running in a 64-bit context. Hacked-up assumptions yield broken code optimizations and are a bad idea!
So, I say again: programmers need to think very very very carefully about why they add each line of code to check such operations. Nimur (talk) 16:30, 26 September 2016 (UTC) [reply]
The above is good advice if the software is written in C or C++ to run on a general purpose computer that has an operating system, but many programs of which it can be said "my code only runs on embedded systems" fail one or more of those criteria. In particular the "written in C or C++" assumption fails on many early video games, and the "has an operating system" assumption fails on the many embedded systems that run the software on bare hardware. Yes, some so-called "embedded systems" are just repurposed PCs, but most of them are not. --Guy Macon (talk) 17:33, 26 September 2016 (UTC)[reply]
32 bit software can access 4 GB only. --Hans Haase (有问题吗) 18:00, 26 September 2016 (UTC)[reply]
You did not say what OS you are using. In Windows you can try to run this program in a compatibility mode, Windows XP, for instance. Ruslik_Zero 19:41, 26 September 2016 (UTC)[reply]
Windows 7, 64-bit. (I also got a warning about not having (I think) Windows NT or DirectX 6.1). Unlike some other old software I've installed, I didn't get warnings about insufficient disc space ("This software needs X MB - you only have Y GB"). Iapetus (talk) 10:28, 27 September 2016 (UTC)[reply]
"why not just allocate all the memory you need and actually handle any failures only if the memory is unavailable? " - because on halfway modern computers with virtual memory, you will almost always get all the memory you ask for - thanks to memory overcommitment, the OS will easily not only give you more than your total physical RAM, but even more than physical RAM and swap combined. In the first case, if the program actually need that memory, the computer will swap a lot, running slow, and putting quite a few read/write cycles on the underlying mass storage device (not to good with small consumer SSDs). In the second case, the program, or another program, will potentially crash at an unpredictable time, because then OS cannot actually fulfil the need for physical memory to back the virtual memory it has allocated for the processes. Both are situations that are not very attractive. --Stephan Schulz (talk) 23:18, 26 September 2016 (UTC)[reply]
Stephan Schulz, I entirely agree with your statements - but my point still stands: if your program actually requires such resources, and such resources will cause a performance or stability problem, then nothing is gained by delaying the discovery of that problem; nor by obfuscating that problem using broken heuristics. The software is already checking resource-availability to determine if the resources meet minimum requirements - my complaint is that the method by which the software performs this check is frequently implemented inappropriately, as this question exemplified.
Clearly, the original question demonstrated a scenario where the software "incorrectly guessed" the available memory. Whichever method this particular software used to estimate memory size was implemented in a completely incorrect fashion. Nothing about the method used in that software - which yielded a totally wrong size for available memory! - would have precluded memory paging, memory overcommitment, or any of the other ill effects you describe. If there is a concern about paging, the programmer should programmatically query the operating system for details about the paging situation. The software should not hard-code an assumption that paging will occur if, say, less than 128 megabytes are available, nor any other similarly ludicrous arbitrary assumption. Paging can occur whenever the memory manager chooses to use it. Your application software should never try to "guess" at that implementation detail. If it matters, query it programmatically.
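For example, on Windows the documented way to ask rather than guess is GlobalMemoryStatusEx; a minimal sketch (not the game's code) might look like:

    /* Query Windows for current memory and page-file state instead of guessing. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX status;
        status.dwLength = sizeof(status);

        if (!GlobalMemoryStatusEx(&status)) {
            fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
            return 1;
        }
        printf("Physical RAM: %llu MiB total, %llu MiB available\n",
               status.ullTotalPhys / (1024 * 1024),
               status.ullAvailPhys / (1024 * 1024));
        printf("Commit limit: %llu MiB (RAM + page file)\n",
               status.ullTotalPageFile / (1024 * 1024));
        printf("Memory load:  %lu%%\n", status.dwMemoryLoad);
        return 0;
    }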
But if you want to control the implementation of paging to a backing store, there's some bad news.
On most major operating systems, user-space programs are not simply handed memory that is guaranteed never to be swapped to backing store. Such a memory allocation is possible - trivial, even - for code that exists in the kernel. In xnu, this is called "wiring" memory; in Linux, it is informally called "pinning" or "mlock"ing; the user-space facilities that do exist (mlock, VirtualLock) are tightly rationed by the kernel. Why does this privilege extend almost exclusively to code that lives in (or interacts directly with) the kernel? Because if your program truthfully depends on the implementation details of memory-management performance, your code belongs in the kernel. User programs should not - may not - care about this detail. Not even on developer-friendly free-software systems like Linux. Your user app - a productivity program, a game, a robot controller, whatever - must rely on the kernel, or on public APIs backed by kernel code, to efficiently manage memory, and you need to trust that the kernel can do this better than you can. If the kernel swaps your data to a backing store, step back humbly, and realize that you write code that lives in userland and that you don't know the details of the hardware. Recognize and admit that the kernel managed your memory in this way to improve your app's performance, subject to the contract you agreed to when you decided to implement for a multi-process system. The kernel is trying to improve performance, not to degrade it - and in any instance in which you can show otherwise, you have found a bug in the kernel. File a bug report to your kernel's developers, who will love to see examples demonstrating such an error.
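For completeness, a minimal sketch of what the rationed user-space facility looks like on Linux (mlock; VirtualLock is the rough Windows analogue):

    /* Pin a buffer from user space with mlock(). This only works within the
     * RLIMIT_MEMLOCK quota (or with CAP_IPC_LOCK), which is exactly the
     * rationing described above. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <errno.h>

    int main(void)
    {
        size_t len = 4 * 1024 * 1024;            /* 4 MiB, arbitrary */
        void *buf = malloc(len);
        if (buf == NULL)
            return 1;

        if (mlock(buf, len) != 0) {
            /* Typically EPERM or ENOMEM once the locked-memory limit is hit. */
            fprintf(stderr, "mlock failed: %s\n", strerror(errno));
        } else {
            puts("buffer is pinned; it will not be swapped out");
            munlock(buf, len);
        }
        free(buf);
        return 0;
    }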
Nimur (talk) 05:34, 27 September 2016 (UTC)[reply]
(EC) "32 bit software can access 4 GB only." Maybe I'm being a bit pedantic, but this statement is so imprecise as to actually be more misleading than correct. Firstly, memory is memory: 32-bit programs can access more than 4GB of disk - ask yourself why? Disk is still memory and still needs "addressing". But even ignoring that, 32-bit software can easily access more than 4GB if it's written that way; the problem is that Windows programs rely on WINDOWS for memory management, so they can only use as much memory as Windows presents. Even ignoring that, even 32-bit Windows can address more than 4GB of RAM using software - it's called Physical Address Extension. The ONLY thing stopping Windows addressing more than 4GB of RAM (without PAE) is that it was originally written without that functionality. It's NOT any kind of "inherent" limitation of 32-bit architecture. Turing complete is Turing complete. My favorite link that someone posted here recently, which is relevant here too, is someone booting a 32-bit Linux distro on an 8-bit microcontroller running 32-bit emulation in software. Granted it took 2 hours to boot up, but there was nothing "physically/virtually" stopping it. Vespine (talk) 23:29, 26 September 2016 (UTC)[reply]
Oh dear, this explanation is at least as flawed as what you are trying to correct. With PAE, a single program is still limited to 4 GB of virtual memory. This is unavoidable due to the architecture of the 32-bit instruction set. Address registers, etc. are 32 bits wide. PAE allows the entire computer to have access to more than 4 GB of physical memory. As our article on Physical Address Extension says: "The 32-bit size of the virtual address is not changed, so regular application software continues to use instructions with 32-bit addresses and (in a flat memory model) is limited to 4 gigabytes of virtual address space." You can't write a single program that accesses more than 4 GB of memory, even with PAE. The only way to increase that limit would be to change the instruction set and/or increase register width, in which case it's no longer a 32 bit architecture. Or, to use a segmented (non-flat) address space, but this is rarely if ever done with 32 bit programs. As to why a 32 bit program can access more than 4 GB of disk space, it's because disk space is not addressed with byte-granularity as is used to address memory. Disks are addressed with sector-granularity, so that with a 32 bit disk address register, the program can address 4 billion sectors of disk space, rather than 4 billion bytes. If the sector size is 512 bytes, as is common, disk space up to 2 TB can be addressed (but no more than that without changes to the disk controller architecture). CodeTalker (talk) 00:32, 27 September 2016 (UTC)[reply]
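As a quick sanity check of that arithmetic (a throwaway snippet, not from anyone in the thread): 2^32 sectors of 512 bytes each comes out to 2 TiB.

    /* Work out the 32-bit LBA limit described above. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t sectors = 1ULL << 32;       /* 2^32 addressable sectors */
        uint64_t bytes   = sectors * 512;    /* 512-byte sectors         */
        printf("%llu bytes = %llu TiB\n",
               (unsigned long long)bytes,
               (unsigned long long)(bytes >> 40));
        return 0;
    }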
I don't disagree with anything you wrote. I was probably a bit fast and loose, because soon it would turn into a multi-page epic :) "With PAE, a single program is still limited to 4 GB of virtual memory." I didn't contradict that. "PAE allows the entire computer to have access to more than 4 GB of physical memory" - correct, that's what I thought I wrote. "You can't write a single program that accesses more than 4 GB of memory, even with PAE." Yes, because Windows cannot allocate that memory to the program, not because it's "impossible" to write a program that uses more than 4GB of RAM? No? Yes, you would need a different instruction set; yes, it might be impractical, it might be slow, it might be cumbersome, but my point is it's not impossible, it's not inherent. Vespine (talk) 00:49, 27 September 2016 (UTC)[reply]
It's perhaps worth remembering that a single program can use more than 4 GB of physical RAM, unless you use some weird definition of "single program"; it just can't use it all simultaneously (at least I don't think it can). On Windows, that's what Address Windowing Extensions was for [1] [2]. It's far less useful than having a 64-bit (or anything beyond 32-bit) address space and being able to use more than 4GB just as you would 4GB, rather than juggling what is allocated in RAM; it was never supported on consumer versions of Windows, and for these and other reasons few (but not zero) programs used it. Still, we need to be careful how we describe the limits. Nil Einne (talk) 06:55, 1 October 2016 (UTC)[reply]
Just to be annoying, the point I was trying to make is that you could make a computer that had 2 blocks of RAM with 4GB each: 4GB of A memory and 4GB of B memory. It would still be "32-bit" architecture. It would not be as fast as having a single dedicated, directly-addressed block, and there are good reasons why 32-bit computers were NOT designed that way, and why there were no "2 blocks of 4GB" designs as an upgrade (that I'm aware of) - instead we went straight to 64-bit. I guess my only point is that just because something is 32-bit does NOT mean it can't use more than 4GB of memory. It only means that in a computer with a specific OS and a specific architecture, using programs written for that OS, you can't address more than 4GB of physical RAM. Vespine (talk) 01:09, 27 September 2016 (UTC)[reply]
Putting different blocks of memory in separate address spaces is essential to creating a non-uniform memory access (NUMA) computer - it is a real thing and it commonly exists. The most obvious example that would be familiar to many readers is certain GPUs that have dedicated video RAM. In some implementations, that video memory has its own address space. NUMA hardware and software designs also appear in lots of other computing applications, most commonly in asymmetric parallel computing architectures. I would agree that this architecture is "annoying" - it is a frequent source of difficult bugs and performance issues; but it exists, nonetheless, because it sometimes has subtle application-specific benefits. Nimur (talk) 05:34, 27 September 2016 (UTC)[reply]
Very good example, I didn't even think of that! The force is strong with Nimur... This was of course not implemented in 32-bit Windows, where video memory, even if it was separate from the system RAM, still counted towards the memory limit. Vespine (talk) 23:14, 27 September 2016 (UTC)[reply]
...one of which I encounter all of the time. In the embedded systems I work with, it is typical to have a largish amount of read-only memory (ROM) and a smaller amount of read-write memory (RAM). --Guy Macon (talk) 06:36, 27 September 2016 (UTC)[reply]
Checking if there's enough memory is silly, like checking if a file exists prior to opening it. The file might get removed (on a multitasking system) in the time between stat() and creat(), for all the programmer knows. Asmrulz (talk) 14:14, 27 September 2016 (UTC)[reply]
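A minimal sketch of the race Asmrulz describes (the classic time-of-check-to-time-of-use problem), using POSIX calls; the file name is made up:

    /* Checking first and then acting leaves a window in which the world can
     * change. The fix is to just attempt the operation and handle the error. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <errno.h>
    #include <string.h>

    int main(void)
    {
        /* Racy: another process may create or delete "data.txt" between
         * the access() check and the open(). */
        if (access("data.txt", F_OK) == 0) {
            /* ... the file "exists", but may be gone by the next line ... */
        }

        /* Better: just try it, atomically, and deal with failure. */
        int fd = open("data.txt", O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd == -1) {
            fprintf(stderr, "open failed: %s\n", strerror(errno));
            return 1;
        }
        close(fd);
        return 0;
    }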
But the memory check is just a "system recommendation"; it's not halting the process (in this case). Surely if a game runs well with 2GB of RAM but runs like crap with 1GB, it's not a terrible thing to just pop up a message that says "this game needs 2GB to run well, but you only have 1GB; it will run but might be crap". A bit late in the post, but this just reminded me of the old DOS games that would be "timed" off the system clock when most computers ran at around 4MHz; running them on a "modern" computer would cause them to race through at thousands of times the normal "speed". Vespine (talk) 23:26, 27 September 2016 (UTC)[reply]

I agree with Vespine here. Also, let's concentrate on the actual example. Yes, it's obviously true that in some extreme circumstances the game may be able to run on systems with less than 32MB of memory. Notably, while very rare at the time, it's possible that the system could only have 31MB of memory for a variety of reasons. Still, such edge cases were at the time correctly, IMO, judged to be very rare. (Noting that IGPs were only just coming into existence, and I'm not even sure if the Intel740 or other early IGPs actually resulted in less RAM being seen on the system.) The most likely case would be something like a system with 24MB. Again, with modern SSDs and interfaces etc. the game could probably survive even if the system didn't have enough RAM, but these weren't a consideration at the time. So from the programmer's and the company's technical-support POV, what the warning hopefully meant is that people trying to run the game on such systems would be sufficiently informed that the game wasn't going to run well, rather than clogging up the company's tech support with problems caused by insufficient systems. (In fact, in some cases it may be made clear that when you receive such a warning you're not going to be supported if you have problems.) Of course, flaws in the detection system can cause unexpected or unnecessary warnings which can also clog up tech support, but we haven't seen any evidence that this was likely at the time, or within the next 5 years or so. Realistically, few programmers, and the people directing them, especially in those days, were going to worry that much about whether their game might have problems 5 years away, especially when those problems were only confusing warnings.

So from the company's and programmer's POV, this warning would hopefully have significantly reduced confusion & support requests from many users, with minimal impact on their market. Note also that I wouldn't call this a system recommendation. It's a recommendation in the sense that it's not enforced, but I would expect 32MB is actually the so-called system minimum for the game.

Some more modern games, and even some non-games, sometimes have similar warnings, although often via the installer. Programmers would ideally need to give more consideration to such non-normal systems in their warnings. However, I expect even in those cases, the majority of the time the warnings worked as intended: the people they were warning were indeed likely to have problems.

To address the issue of minimums not guaranteeing the game will run acceptably: of course Nimur has a point that the game can't in any way guarantee it will get the RAM it needs, since the OS should hopefully manage it smartly but will ultimately need to deal with competing demands. (Although IMO memory management has gotten a lot better since February 1999, which was, after all, before even Windows 2000, the first NT kernel that you might IMO want to run a game on.) But this is largely irrelevant to the warning mentioned. At least we haven't seen anything suggesting the game tells people with 32MB of RAM or more that the game will run fine. IIRC there were some games (or system-support checkers) which do tell the user their system meets the minimum or even recommended amount. In such cases the programmer and technical-support team need to be careful to ensure the messages don't tell the user that they're guaranteed to have no problems due to insufficient RAM (or whatever) because they meet the minimum or even the recommended spec. Still, with properly worded information it's likely such problems can be reduced and these tests will work as intended, i.e. tell people whose system isn't likely to work well with the game of that fact, without telling people their system is definitely going to work well with the game.

Note in any case, especially in 1999 when HoMM 3 came out, the common recommendation was to turn off any other programs as much as possible when running games. Even nowadays some people still recommend that (although IMO it's rarely necessary). And if the user did have a system meeting the minimum or recommendation, they may actually have been able to play the game by doing so. So having them call tech support may be useful if support can direct the user to close programs which may be taking excessive system resources and the user doesn't mind the closure. Whereas there's little point in a user calling tech support to basically tell them "my system is too crap for the game" when all tech support can say is "you need to upgrade".

I'm not denying some programs, especially games, have ill-advised warnings which cause excessive confusion - sometimes even more confusion than benefit. Worse are those that, rather than simply warning, enforce some compatibility requirement that isn't actually needed. E.g. as an early adopter of Windows x64 in the form of Windows XP x64, I did have to deal with programs which refused to install because the OS wasn't supported (as it's based on Windows Server 2003, so the version of the OS wasn't something the program was designed to accept). I've dealt with other silly stuff before that I can't recall offhand. But that's separate from whether this particular warning, or all warnings about system minimums or recommendations for games, are helpful. Let's not forget it's often suggested that the complexity of having to handle all the different possible system configs, and the problems that can arise, is one reason for the reduced interest of companies in supporting such systems compared to dedicated gaming consoles in recent history.

Although it should be mentioned that in 1999 people were, I think, much more likely to check minimums and recommendations, and to know how to compare them to their systems, than nowadays. It helped to some extent that megahertz often did matter in those days for the systems used by games. So saying "a 100 MHz Pentium" then was far more meaningful than saying "a 2 GHz dual core" is nowadays. Even with graphics, despite the large number of vendors at the time, it was in some ways less confusing than nowadays trying to compare the feature set and particular performance of GPUs, which are often rebranded to newer model numbers with minimal changes etc.

Nil Einne (talk) 06:37, 1 October 2016 (UTC)[reply]

Help! (Windows batch script)

How can I run a Windows batch script without invoking cmd.exe or conhost.exe? Basically I am looking for a non-CLI program that acts like a CLI program, but as far as the OS is concerned is a GUI program so it doesn't invoke cmd.exe or conhost.exe. Thanks! — Preceding unsigned comment added by Undeservingpoor1111111 (talkcontribs) 19:36, 26 September 2016 (UTC)[reply]

You can look at Visual Command Line. Ruslik_Zero 20:43, 26 September 2016 (UTC)[reply]
If you are a programmer, you can use the CreateProcess function to spawn an executable. But in your specific request, you want to run a batch file, which is necessarily going to require an instance of Cmd.exe: for batch files, Cmd.exe runs the commands sequentially. Batch files are not executable - they are executed inside Cmd.exe. Even if you use a tool like Visual Command Line, it will still create an (invisible or windowless) instance of Cmd.exe on your behalf! If you want other behavior, you cannot use a batch file - at least, not using the default tools that Windows provides. Nimur (talk) 22:34, 26 September 2016 (UTC)[reply]
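For what it's worth, here is a minimal sketch of the CreateProcess route with a hidden console (the batch path is made up); note that cmd.exe still runs - it is merely windowless, which is the point made above:

    /* Launch a batch file via cmd.exe with no visible console window. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        char cmdline[] = "cmd.exe /C \"C:\\scripts\\job.bat\"";   /* hypothetical path */

        if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE,
                            CREATE_NO_WINDOW, NULL, NULL, &si, &pi)) {
            fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
            return 1;
        }
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return 0;
    }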
I agree with Nimur. A "batch script" itself doesn't actually do anything; it is essentially just a list of CLI commands. I think you need to approach this from the other side: WHAT is it that you actually want the batch script to DO? For a very simple example, if you have a batch script that copies and renames a file, you could probably find a program that does that without using CMD. It depends how complicated your batch script is, but for a lot of the commands you can use in a batch file, someone's probably written an equivalent program that does the same thing. Vespine (talk) 23:05, 26 September 2016 (UTC)[reply]
No. I think you both misunderstand. I want an exe to run a batch file as if it were cmd.exe running it, but not actually cmd.exe and not a CLI (which cmd.exe is). Basically a third-party program that implements the entire batch script language and can follow batch files but IS NOT cmd.exe. For example, let's say you deleted cmd.exe and conhost.exe off your Windows installation (ignore the fact that the system would be unstable for the moment) and now you try to run a batch file. Obviously it doesn't work. But, ta-da, you have a non-CLI exe program that you can drag and drop your batch file onto, and it follows it like it was cmd.exe, except it isn't cmd.exe and doesn't rely on conhost.exe because it isn't a CLI. You could also copy the exe to Wine or Mac and run your batch scripts without cmd.exe. Get it? I'm not interested in whether this is a good idea or not, or whether there are other ways to do it like re-coding the script into bash. The goal is to run an unmodified Windows batch file without invoking cmd.exe, but having the script followed and all commands executed as if it were running on cmd.exe. Or to put it another way, I want a cmd.exe emulator that pretends it is cmd.exe. Undeservingpoor1111111 (talk) 00:29, 27 September 2016 (UTC)[reply]
You seek a program that is exactly functionally equivalent to Microsoft's Cmd.exe, but that is not Cmd.exe. In principle, that program can exist; perhaps some hobbyist or commercial vendor has made it; perhaps you have the expertise to make it yourself by carefully studying the documentation at Microsoft TechNet. But I am not aware of any existing program that meets your needs. Microsoft's Cmd.exe, like much of Windows software, is not free or open-source software. It would take great effort to create this program, and if implemented correctly, it would be indistinguishable from Cmd.exe, which already exists and can be used at no additional cost if you already run Windows. Perhaps you might find some insight by studying how the developers at DOSBox implemented their emulator; their software is free and open-source, but it only implements a tiny fraction of all features provided by modern Windows command shells. Our article links to other similar emulators. Note that DOSBox emulates the entire machine - its hardware and system software, not just the DOS command-shell program; but - as their code is available to inspect and modify - you can, with sufficient skill and effort, extract only the command-shell logic from their software. Nimur (talk) 04:34, 27 September 2016 (UTC)[reply]
Actually, such a program does already exist: the cmd.exe from ReactOS is functionally identical to Windows CMD, open-source, and can even replace CMD on a Windows system with no ill effects. However, since it is a clone of cmd.exe and functions exactly the same way, it still requires conhost.exe (either the Windows or ReactOS version) to work, so it doesn't satisfy the OP's requirements. 125.141.200.46 (talk) 09:28, 27 September 2016 (UTC)[reply]
I added to the title to make it useful. StuRat (talk) 00:22, 27 September 2016 (UTC) [reply]
It sounds like you are trying to reinvent the wheel. While there are sometimes good reasons to create very similar, or even functionally identical, software, "necessity is the mother of invention" is an idiom that definitely holds true in software. You have a program that does exactly what you need, so why would someone write the same program? "If you delete cmd.exe"? Well, presumably you have to "load" your other program onto the computer too - just load cmd.exe again. Vespine (talk) 05:33, 27 September 2016 (UTC)[reply]
I have no experience with them, but Batch File Compilers, which turn a batch file into an exe file, exist. They basically change each line in the batch file into an exec/system call, I presume, adding a bit of extra code for ifs and gotos. Try Googling for Batch File Compiler. Martin. 93.95.251.162 (talk) 13:56, 27 September 2016 (UTC)[reply]
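A minimal sketch of the naive approach Martin describes (purely illustrative - real batch compilers are more involved, and on Windows system() itself delegates to cmd.exe, so this hides the interpreter rather than removing it; flow control such as if/goto is omitted):

    /* Read a batch file and hand each line to the shell, one at a time. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s script.bat\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "r");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }

        char line[1024];
        while (fgets(line, sizeof(line), f) != NULL) {
            line[strcspn(line, "\r\n")] = '\0';   /* strip the newline */
            if (line[0] != '\0')
                system(line);                     /* run one command   */
        }
        fclose(f);
        return 0;
    }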
This sounds like yet another XY problem. What is the ultimate goal you are trying to accomplish? --47.138.165.200 (talk) 02:05, 28 September 2016 (UTC)[reply]
Your post sounds like yet another arrogant Unix user looking down on anyone who does things slightly different to how the IETF document stipulates it should be done. 125.141.200.46 (talk) 21:11, 30 September 2016 (UTC)[reply]