Scientific Computing is a vast domain! There are thousands of “scientific” applications, and what you run is often based on your own code development efforts. Performance bottlenecks can arise from many hardware, software, and job-run characteristics, and the “system requirements” published by software vendors or developers may not be ideal: they are sometimes based on outdated testing or limited configuration variation. We are here to help you with that.
What CPU would be best for Scientific Computing?
There are two main choices: Intel Xeon (single or dual socket) and AMD Threadripper PRO / EPYC (which are based on the same underlying technology). For the majority of cases we recommend single-socket processors like the Xeon W and Threadripper PRO. These CPUs offer options with high core counts and large memory capacity, without the complexity, expense, and memory- and core-binding complications of dual-socket systems.
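Whichever platform you pick, it is worth confirming how many cores your jobs can actually use, since schedulers, containers, and affinity settings can restrict a process to a subset of the machine. A minimal Python sketch (os.sched_getaffinity is Linux-only):

```python
import os

# Logical CPUs physically present in the machine.
print(f"Logical CPUs on this machine: {os.cpu_count()}")

# CPUs this process is actually allowed to run on (Linux-only call);
# job schedulers and cgroup limits can make this smaller than the total.
if hasattr(os, "sched_getaffinity"):
    print(f"CPUs available to this process: {len(os.sched_getaffinity(0))}")
```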
Do more CPU cores make scientific computing faster?
This depends on two main factors:
- The parallel scalability of your application
- The memory-bound character of your application
If your code stops scaling beyond a certain core count, or saturates memory bandwidth before the cores do, extra cores add little. Amdahl's law makes the first point concrete (see the sketch below).
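Here is a minimal sketch of Amdahl's law: if a fraction p of a job's runtime parallelises, the best-case speedup on n cores is 1 / ((1 − p) + p/n). The 90%-parallel figure below is an assumed example, not a measurement:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Best-case speedup on n cores when fraction p of runtime parallelises."""
    return 1.0 / ((1.0 - p) + p / n)

# Assumed example: a code whose runtime is 90% parallel.
for cores in (4, 16, 64):
    print(f"{cores:3d} cores -> {amdahl_speedup(0.90, cores):4.1f}x speedup")
```

Even at 90% parallel, 64 cores deliver under a 9x speedup, which is why core count alone does not guarantee faster runs.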
GPU
For most scientific computing applications, you do not need a high-end graphics card; integrated graphics are adequate. But if you will be using the GPU for scientific visualisation, then we recommend a higher-end NVIDIA RTX A-series card like the A4000 or A5000.
NVIDIA’s “consumer” GeForce GPUs are also an option: anything from the RTX 3060 to the RTX 4090 is very good, and these GPUs are also excellent for more demanding 3D display requirements. Don’t want to go through all of this? Call us at our toll-free number 18003092944.
Fortunately, many scientific applications that have GPU acceleration work in single precision (FP32). In this case the higher-end RTX GPUs offer good performance at relatively low cost, but they can be difficult to configure in a system with more than two GPUs because of their cooling design and physical size.
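If you are unsure whether your own code tolerates FP32, a quick sanity check is to run a representative kernel at both precisions and compare the answers. A hedged NumPy sketch; the linear solve here is only a stand-in for your actual workload:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in workload: solve a random linear system at both precisions.
a64 = rng.standard_normal((1000, 1000))
b64 = rng.standard_normal(1000)

x64 = np.linalg.solve(a64, b64)                    # FP64 reference
x32 = np.linalg.solve(a64.astype(np.float32),
                      b64.astype(np.float32))      # FP32 run

# Relative difference tells you whether FP32 is adequate for this kernel.
rel_err = np.linalg.norm(x64 - x32) / np.linalg.norm(x64)
print(f"FP32 vs FP64 relative error: {rel_err:.2e}")
```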
VRAM (video memory)
This can vary depending on the application. Many applications will give good acceleration with as little as 12GB of GPU memory. However, if you are working with large jobs or big data sets then 24GB (A5000, RTX 3090) or 48GB (A6000) may be required.
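A useful back-of-envelope check is to total the footprint of the largest arrays you keep resident on the GPU. A small sizing sketch (the 40,000 × 40,000 matrix is an assumed example):

```python
import numpy as np

def array_gb(shape, dtype) -> float:
    """Memory footprint of a dense array of this shape and dtype, in GB."""
    return np.prod(shape) * np.dtype(dtype).itemsize / 1e9

# Assumed example: a 40,000 x 40,000 dense matrix.
print(f"FP32: {array_gb((40_000, 40_000), np.float32):.1f} GB")  # ~6.4 GB
print(f"FP64: {array_gb((40_000, 40_000), np.float64):.1f} GB")  # ~12.8 GB
```

A single matrix like this already crowds a 12GB card in double precision, which is where the 24GB and 48GB options earn their keep.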
How much RAM does scientific computing need?
Since there are so many potential applications and job sizes, this is highly dependent on the specific use case. For workflows focused on CPU-based calculations, 256GB to 512GB is fairly typical, and even 1TB is not unheard of.
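Rather than guessing, you can measure the peak memory of a representative job and then size RAM with comfortable headroom above it. A minimal sketch using Python's standard resource module (Unix-only; on Linux, ru_maxrss is reported in kilobytes):

```python
import resource

# Stand-in workload: a list of 50 million references, roughly 400MB.
data = [0.0] * 50_000_000

# Peak resident set size of this process so far (kilobytes on Linux).
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"Peak RAM used: {peak_kb / 1e6:.2f} GB")
```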
Storage (Hard Drives)
A good general recommendation is a high-performance 1TB NVMe drive as the main system drive, for the OS and applications. You may be able to configure additional NVMe storage for data needs, and larger capacities are available with “standard” (SATA-based) SSDs. If your jobs would otherwise spill to disk, it is usually much better to increase your RAM instead, as memory is orders of magnitude faster than even high-speed SSDs.
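If you want to check whether storage is actually your bottleneck, a rough write-throughput test is easy to script. A minimal sketch (it writes a 1GiB scratch file; the filename is a placeholder, so point it at the drive under test):

```python
import os
import time

PATH = "scratch.bin"                 # hypothetical test file on the target drive
CHUNK = b"\0" * (64 * 1024 * 1024)   # 64MB block of zeros
N = 16                               # 16 blocks -> 1GiB total

t0 = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(N):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())             # make sure the data actually hits the disk
elapsed = time.perf_counter() - t0

total = N * len(CHUNK)
print(f"Write throughput: {total / elapsed / 1e6:.0f} MB/s")
os.remove(PATH)
```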
Looking for a Scientific Computing workstation? DM us to get started!
Check out our catalogue of optimised Scientific Computing builds here.
We build and ship custom PCs across India with up to 3 years of doorstep warranty and lifetime technical support. We have 3 stores, in Hyderabad, Gurgaon & Bangalore. Feel free to visit them, or get in touch with us over a call for a consultation.