Wikipedia:Reference desk/Archives/Computing/2017 June 18

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 18

Multiprocessor computing

In a year or so I may need a multiprocessor computer. I've seen a 96-core workstation made by Dell in the past; I think they stopped selling those. In the meantime I want to study what is available. I went to SuperMicro[1], of course. It seems they offer 22-core workstations[2]. I need a workstation, not a server. Some of the websites say it is 22/24 cores. Somebody said that the second number after the slash is the number of "virtual" cores, that is, each real core has an extra "virtual" one. The XS8-2460V4-4GPU unit (workstation) also has 4 GPUs.

Someone suggested yesterday that SuperMicro hardware is old news and that one can now do multiprocessing via the cloud (???) and also GPUs. Is that correct?

I would appreciate any comments.

Thank you. --AboutFace 22 (talk) 15:20, 18 June 2017 (UTC)[reply]

It is impossible to properly answer your question without you explaining in detail what you want to do that you believe needs multiprocessing. The answer for someone playing high-end video games or editing IMAX video streams is different than the answer for someone mining bitcoins, and the answer for someone mining bitcoins is different than the answer for someone who is doing complex SQL work on a huge database. --Guy Macon (talk) 15:34, 18 June 2017 (UTC)[reply]
22/24 doesn't make sense. You probably mean 12/24? See our hyperthreading article to understand the difference between actual cores and virtual cores implemented by hyperthreading. CodeTalker (talk) 23:27, 18 June 2017 (UTC)[reply]
The second website he links to does show a 22-core Xeon system for $14,298. Yes, you can do such computing on the cloud. I've seen services that give you a small amount of compute for free and then charge for more. I looked into that, and for me it was more cost effective to buy several refurbished computers. Bubba73 You talkin' to me? 00:07, 19 June 2017 (UTC)[reply]

This is what I need to do. It is not gaming. It is numerical integration over a 2-D surface. I also need to compute the entire cycle in about 0.1 second. So, the idea is to break this surface into small fragments and use a multi-CPU system, with each CPU integrating over 1/N of the area, where N is the number of cores. --AboutFace 22 (talk) 01:39, 19 June 2017 (UTC)[reply]

@CodeTalker, yes, I made a mistake, sorry. It was actually 22/44 cores. --AboutFace 22 (talk) 01:42, 19 June 2017 (UTC)[reply]

@Bubba73, thanks. It is a very valuable piece of information. --AboutFace 22 (talk) 01:45, 19 June 2017 (UTC)[reply]

It works with what I'm doing, but I don't know about what you are doing. I have 9 computers (i5s and i7s) that I got refurbished for $145-210 (bare bones) and I run 20-30 copies of the program. But in your problem, it sounds like you need one CPU to control the integration done by the others. Bubba73 You talkin' to me? 02:44, 19 June 2017 (UTC)[reply]

Presumably you need to repeat the process continuously. Your bottleneck may well be interprocess communication. I would recommend prototyping your software and optimising the algorithms first. Then try it out on a cloud system. Only if that convinces you it will work should you invest in real hardware - remember the longer you leave the hardware purchase, the cheaper it gets (though not as quickly as it used to). All the best: Rich Farmbrough, 20:06, 19 June 2017 (UTC).[reply]

Thank you for all suggestions. They really help. --AboutFace 22 (talk) 20:21, 19 June 2017 (UTC)[reply]

The cloud can be used to test, but it probably won't give you a consistent response under 0.1 seconds. Networking delays, and waiting to get access to your VM could easily delay that much. Graeme Bartlett (talk) 11:02, 20 June 2017 (UTC)[reply]
AboutFace 22 has not made it clear that he really needs a result in 0.1 second. It may very well be that what he requires is a sustained rate of 6,000 results per minute and he doesn't care if the system takes 10 seconds to deliver the first result.
Getting back to the topic of multiprocessors, a $500 Nvidia Geforce GTX 1080 video card has 2,560 CUDA cores, and you can install four of them on a modern motherboard. The question is, are those 10,240 cores capable of doing the job AboutFace 22 wants them to do? If so, they may be considerably faster than even 22 general-purpose CPUs. Or they may not be able to do the job at all, forcing him to use the more expensive general-purpose CPUs.
There is another possible trade off between money and results. It may turn out that it makes more sense not to buy more expensive hardware but instead to hire a really good programmer to optimize the bottlenecks in his program. I have personally seen 100X speedups by replacing a poorly-optimized FORTRAN routine with a small assembly-language replacement, leaving the rest of the code as it is. --Guy Macon (talk) 12:14, 20 June 2017 (UTC)[reply]

Very interesting posts and ideas. Thank you. I believe I described my task clearly: it is numerical integration over a 2-D surface, namely a hemisphere. The process involves many repetitive operations. Optimization of the program is on my agenda and will be done; it is mostly finding or creating a simple database to store intermediate values instead of computing them every time. Using a video card, which seems to be built on the multi-core principle, is something I learned about only after posting this thread. How can I program for those cores? Is it possible? --AboutFace 22 (talk) 15:10, 20 June 2017 (UTC)[reply]

That graphics card approach suggested by @Guy Macon really made me very excited. If only I could program for it, even in assembly language, that would be a solution to many of my problems. Thanks, ---AboutFace 22 (talk) 15:18, 20 June 2017 (UTC)[reply]

Well, I think I found all the answers, thanks to @Guy Macon[3]. A very interesting development[4]. All I need is C or C++ and this is what I am doing now. --AboutFace 22 (talk) 15:34, 20 June 2017 (UTC)[reply]

You can use the OpenCL library to run an R program on a high-end graphics card; this is probably more efficient than writing the code yourself in C/C++. Note that configuring the driver on a high-end graphics card can be very tricky: my best advice is to Google the best combination of OS, motherboard, and card, and follow the recommendations exactly (e.g. if it says to use Ubuntu 14.04, don't use 14.05). It took me months to get my card working correctly. OldTimeNESter (talk) 13:59, 22 June 2017 (UTC)[reply]

@OldTimeNESter thank you. I am not familiar with OpenCL but FORTRAN, C, C++ are my territory. I will look into the OpenCL for sure though. --AboutFace 22 (talk) 14:52, 22 June 2017 (UTC)[reply]

@AboutFace 22: What numerical integration algorithm are you using? Note that the best numerical integration algorithm will depend on both the integration problem and your hardware.--Jasper Deng (talk) 06:24, 23 June 2017 (UTC)[reply]