Wikipedia:Reference desk/Archives/Computing/2021 October 30

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 30

Is there a Windows program that shows a huge transparent clock on the screen all the time, having priority above all other programs?

I want to do some tests related to the perception of time, to see if looking at a clock all the time while browsing on the PC would do something to my perception of time. Is there any program that allows me to put a huge transparent clock on the screen that would be shown all the time while I am browsing, gaming, ...? 177.206.39.108 (talk) 00:43, 30 October 2021 (UTC)

Googling "watermark screen clock" shows some promising results. 41.165.67.114 (talk) 05:40, 30 October 2021 (UTC)
Rainmeter is open-source desktop customisation software that lets you add loads of gadgets with different skins. One of those is a clock. You can simply hide everything else by right-clicking, then click on the clock and choose "Settings > Position > Stay topmost". — Berrely • TalkContribs 09:40, 30 October 2021 (UTC)
Thanks for the info, Rainmeter is the answer and is also easy to deal with. 177.206.39.108 (talk) 15:51, 30 October 2021 (UTC)
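
For anyone who would rather not install anything, a minimal do-it-yourself sketch of the same idea is below, using Python's standard tkinter library (the font, the 50% opacity and the one-second refresh are arbitrary choices, not taken from anything above). On Windows, the -topmost and -alpha window attributes give a large, semi-transparent clock that stays above all other programs.

    # Minimal always-on-top, semi-transparent clock (a sketch, not a polished app).
    import time
    import tkinter as tk

    root = tk.Tk()
    root.overrideredirect(True)          # no title bar or borders
    root.attributes("-topmost", True)    # keep the window above all other programs
    root.attributes("-alpha", 0.5)       # 50% opacity, i.e. a "transparent" clock

    label = tk.Label(root, font=("Consolas", 96), fg="white", bg="black")
    label.pack()

    def tick():
        label.config(text=time.strftime("%H:%M:%S"))
        root.after(1000, tick)           # refresh once per second

    tick()
    root.mainloop()

Because overrideredirect removes the close button, the easiest way to stop the sketch is to end the Python process; a tool like Rainmeter handles those rough edges for you.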

Need a Good Reference for the Fact that Software Maintenance Costs More than Development

I'm writing a paper where I want to say that software maintenance costs significantly more than does actual software development. When I was doing research in software engineering this was something that there was overwhelming empirical justification for. But that was a long time ago and I can't remember the sources and searching hasn't yielded anything really good. Any suggestions? --MadScientistX11 (talk) 21:54, 30 October 2021 (UTC)

You should stick with newer sources for something like that, and anyway it will vary. But software is considered more disposable now than it was back in the day. It's not exactly more reliable (if anything it is less reliable), but it is easier to fix bugs when they occur, and it's quicker to write. This is largely due to faster computers allowing a lot of dynamic error checking as the program runs, use of garbage collection eliminating classes of memory management errors, etc. Of course this depends on the application area etc., and what you choose to call maintenance. But if you get a typical dev job today, you'll be spending a higher fraction of your time implementing new features (vs. chasing bugs in old stuff) than when most of the old books were written. 2601:648:8202:350:0:0:0:D4A (talk) 10:52, 31 October 2021 (UTC)
Interesting perspective. But I would like to see some hard data. It's a shame in my opinion that so few academics do actual empirical studies of real software development projects. I know it's difficult, because few project managers want to waste time contributing to research when they have so many other pressures to deal with, but I know it can be done. There was a guy at MCC (a consortium in Austin that no longer exists) named Bill Curtis who did excellent work like this, and also a professor I knew at USC named Walt Scacchi. But I'm sure they both retired long ago. I understand your argument and it makes sense; I'm just saying I would like to see some empirical data that supports it. --MadScientistX11 (talk) 20:40, 1 November 2021 (UTC)
It may be true for relatively newly developed software that maintenance is not the main cost issue, but there is still an awful lot of legacy code out there surviving from custom-built systems – which back in the day included virtually all major software applications. Many C programs that used to compile and run under 32-bit OS X no longer compiled after the update to (solely) 64-bit macOS 10.15 Catalina, affecting all legacy applications using the once popular Carbon API. I think many applications were not rescued but simply abandoned – how to quantify the cost of that loss? A common trick for mass-market software is to let the customers bear the cost of maintenance by charging them for new releases with supposedly "enhanced functionality". It is almost impossible to define the notion of maintenance in a robust way that is independent of the market segment, and the relation between the costs of development and maintenance may be very different for different market segments, which makes this hard to research today. --Lambiam 00:10, 1 November 2021 (UTC)
Also a good point. Thanks. --MadScientistX11 (talk) 20:40, 1 November 2021 (UTC)

What is the Status of Moore's Law?

I seem to remember that approximately 30 years ago or more I was reading articles that said that in 20 years, due to the laws of physics, the components on chips could not be squeezed any closer together (sorry for the non-technical way I said that; I'm not a HW guy) and Moore's Law would be over. But it seems like computers continue to get faster. So I'm wondering: 1) Did the laws of physics change? (Doubtful, I think I would have heard about it.) 2) Were the estimates conservative and we still haven't hit the limits yet? 3) Have we hit the limits, but people don't talk about it because that would decrease the motivation for us to spend money on the latest phone or tablet? 4) (What I think is the closest to the truth) Although we have hit the limits, clever chip and other HW designers keep finding other ways, like flash memory, to make computers smaller and faster. If 4 is true, has there at least been a slowing down of Moore's Law, and do people think the days of constant improvement in size/performance are soon to be a thing of the past? --MadScientistX11 (talk) 22:45, 30 October 2021 (UTC)

This is based on my casual observations over the decades. Usually a new generation of Intel CPU was more than twice as fast as the old one, e.g. 8088 to 80286 to 386 to 486 (maybe) to Pentium.
My main computer is over 6 years old. It used to be that a 6-year-old computer was horribly slow compared to the current ones. That isn't true these days. Bubba73 You talkin' to me? 22:58, 30 October 2021 (UTC)
And doubling the number of transistors these days doesn't double the speed. You can put in twice as many cores, but it doesn't scale linearly. There are other bottlenecks, like memory bandwidth. I've done testing on a four-core hyperthreaded i7. Running eight threads gives about 5.6x the single-core performance, at best. I've had eight threads be between 2.5x and 3x the single-core performance. On an eight-core hyperthreaded Xeon, I've had 16 threads be under 4x the single-core performance, because of the memory access bottleneck. Bubba73 You talkin' to me? 02:59, 31 October 2021 (UTC)
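
For anyone who wants to reproduce that kind of measurement, a rough sketch is below. It is not Bubba73's actual test: the squaring loop is an arbitrary stand-in workload, and Python's multiprocessing module is used so the work really runs on separate cores instead of being serialised by the interpreter lock. It times the same total amount of CPU-bound work on 1, 2, 4, 8 and 16 workers and prints the speedup, which on real hardware typically falls well short of the worker count once memory bandwidth and shared execution units become the bottleneck.

    # Rough core-scaling test: split a fixed amount of CPU-bound work across
    # N worker processes and report the speedup over the single-worker run.
    import time
    from multiprocessing import Pool

    def busy_work(iterations):
        # Arbitrary stand-in for a real workload.
        total = 0
        for i in range(iterations):
            total += i * i
        return total

    TOTAL_ITERATIONS = 40_000_000        # same total amount of work in every run

    def timed_run(workers):
        chunks = [TOTAL_ITERATIONS // workers] * workers
        start = time.perf_counter()
        with Pool(processes=workers) as pool:
            pool.map(busy_work, chunks)
        return time.perf_counter() - start

    if __name__ == "__main__":
        baseline = timed_run(1)
        for workers in (2, 4, 8, 16):
            elapsed = timed_run(workers)
            print(f"{workers:2d} workers: {baseline / elapsed:.2f}x vs 1 worker")

Pool start-up and inter-process overhead are included in each timing, so small workloads will understate the scaling even further; the numbers only mean anything relative to each other on the same machine.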
Hyperthreading is not the same as a full extra core. Only the control logic is doubled, not the actual ALU components. The idea is that a single thread can rarely use all the execution units because of data dependencies (if you compute a*(b+c), the multiplier has to wait for the result of the adder before it can do its thing), so running a second thread, which does not share data dependencies with the first, can use the unoccupied units "for free". How well this works depends on the structure of the code you are running - in the simplified example above, if there is only one multiplier and both threads need it, one has to wait. --Stephan Schulz (talk) 06:06, 4 November 2021 (UTC)
Yes, you are right about that. But even in a chip without hyperthreading, the performance normally doesn't scale linearly with the number of cores. Bubba73 You talkin' to me? 06:35, 4 November 2021 (UTC)
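
A simple model for why it doesn't scale linearly is Amdahl's law: if a fraction s of the job is serial (or serialised by a shared resource such as the memory bus), N cores can speed it up by at most 1 / (s + (1 - s)/N). The tiny calculation below illustrates this; the 10% serial fraction is an assumed number, not something measured from the posts above.

    # Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N), where s is the serial fraction.
    def amdahl_speedup(cores, serial_fraction):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    for cores in (2, 4, 8, 16):
        print(cores, "cores ->", round(amdahl_speedup(cores, 0.10), 2), "x")
    # Prints roughly 1.82, 3.08, 4.71 and 6.4 - far from linear even before adding
    # the memory-bandwidth contention that real chips hit on top of this.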
You will find two very different answers to this question because there are two approaches to the answer. If you define Moore's Law to be strictly about the number of transistors on a chip, then it is slowing down. Not only is there a limit to how many you can cram on a chip, there is a lack of need. We don't use disconnected computers now. We use heavily connected computers. So, there is less need for a single computer to have as much horsepower when the work is shared by many other machines. However, if you define Moore's Law to be about processing power, it is accelerating. Video cards alone have drastically increased the processing power of many computers. Interconnectivity increases power as well. With little processors in everything from computers to thermostats to watches, it is easy to see that processing power increases when you connect everything. Add to that the ability to harness the cloud to further increase processing power, and it is easy to understand the claim that processing power is increasing. But, there can be a third answer. If half the people think Moore's Law is slowing and the other half think it is accelerating, the average answer would be that it is holding steady. Right? 97.82.165.112 (talk) 14:29, 1 November 2021 (UTC)
Excellent points, thanks. --MadScientistX11 (talk) 20:43, 1 November 2021 (UTC)
While I agree with the interconnectivity part in general, there are exceptions such as Cerebras, which is basically a computer consisting of one big chip. If I recall correctly, this chip on its own was equal to several clustered computers in computing power, at a much lower power requirement. If the surface area of the chip isn't factored in, I think this chip helps get the average number up. Rmvandijk (talk) 08:09, 3 November 2021 (UTC)