Big memory

Big memory computers are machines with a large amount of random-access memory (RAM). They are used for databases, graph analytics and, more generally, for high-performance computing, data science and big data.[1] Some database systems, known as in-memory databases, are designed to run mostly in memory, rarely if ever retrieving data from disk or flash memory; see the list of in-memory databases.

Details

The performance of big memory systems depends on how the central processing units (CPUs) access memory: through a single conventional memory controller, or through non-uniform memory access (NUMA), in which each CPU reaches its own local memory faster than memory attached to other CPUs. Performance also depends on the size and design of the CPU cache.
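
On Linux, one way to see the NUMA distinction in practice is the libnuma library. The sketch below is a minimal illustration rather than code from any cited source: it assumes a Linux system with libnuma installed (compile with -lnuma), and the node number 0 and the 1 GiB buffer size are arbitrary choices for the example. It binds an allocation to a single node so that threads running on that node's CPUs get local rather than remote accesses.

    #include <numa.h>      /* libnuma; compile with -lnuma */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        if (numa_available() < 0) {        /* kernel or hardware without NUMA */
            fprintf(stderr, "NUMA is not available on this system\n");
            return EXIT_FAILURE;
        }

        size_t size = 1UL << 30;           /* 1 GiB (illustrative size) */

        /* Bind the allocation to NUMA node 0 (an arbitrary example node),
           so threads pinned to that node's CPUs avoid remote accesses. */
        void *buf = numa_alloc_onnode(size, 0);
        if (buf == NULL) {
            perror("numa_alloc_onnode");
            return EXIT_FAILURE;
        }

        memset(buf, 0, size);              /* touch pages to commit them */
        numa_free(buf, size);
        return EXIT_SUCCESS;
    }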

Performance also depends on operating system (OS) design. The huge pages feature in Linux and other OSes can improve the efficiency of virtual memory by mapping memory in much larger units, reducing the number of page-table entries and translation lookaside buffer (TLB) misses needed to cover a given amount of memory.[2] The transparent huge pages feature in Linux automates the use of huge pages and can offer better performance for some big-memory workloads.[3] Large-page support in Microsoft Windows likewise enables server applications to establish large-page memory regions, typically 2 MB rather than the native 4 KB page size on x86 processors.[4]
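
As a concrete sketch of these mechanisms (a minimal example under stated assumptions, not a definitive implementation): on Linux, an application can request explicit huge pages with mmap() and the MAP_HUGETLB flag, which succeeds only if huge pages have been reserved in advance (for example through /proc/sys/vm/nr_hugepages), and can otherwise fall back to ordinary pages while hinting transparent huge pages with madvise(MADV_HUGEPAGE). The 1 GiB size below is arbitrary.

    #define _GNU_SOURCE               /* exposes MAP_HUGETLB and MADV_HUGEPAGE */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 1UL << 30;       /* 1 GiB, a multiple of the 2 MiB huge-page size */

        /* Explicit huge pages: fails unless huge pages were reserved beforehand. */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            /* Fall back to ordinary pages and ask the kernel to back the
               range with transparent huge pages where possible. */
            p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
            }
            madvise(p, len, MADV_HUGEPAGE);
        }

        memset(p, 0, len);            /* touching the region is where fewer TLB misses help */
        munmap(p, len);
        return 0;
    }

The Windows analogue uses GetLargePageMinimum() to find the large-page size and VirtualAlloc() with the MEM_LARGE_PAGES flag; the process additionally needs the SeLockMemoryPrivilege ("Lock pages in memory") privilege.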

References

  1. "Efficient Virtual Memory for Big Memory Servers" (PDF). Retrieved 2016-09-24.
  2. "Huge pages part 1 (Introduction)". LWN.net. Retrieved 2016-09-24.
  3. "Transparent huge pages in 2.6.38". LWN.net. Retrieved 2016-09-24.
  4. "Large-Page Support". Microsoft. Retrieved 2016-09-24.