Beyond Physical Memory: Policies

  • Cache Management
    • cache for virtual memory pages in the system
      • Need to minimize cache misses
      • Maximize the cache hit rate
    • Average memory access time
      • cost of disk access is so high in modern systems that even a tiny miss rate will quickly dominate the overall AMAT of running programs.
      • AMAT = T_M + (P_Miss · T_D), where T_M is the cost of accessing memory, T_D the cost of accessing disk, and P_Miss the probability of a miss
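A quick back-of-the-envelope calculation makes the point concrete. This is just a sketch: the latencies below (100 ns for memory, 10 ms for disk) are assumed example values, not numbers from the text.

```python
# AMAT = T_M + (P_Miss * T_D)
T_M = 100e-9   # cost of accessing memory: 100 ns (assumed)
T_D = 10e-3    # cost of accessing disk: 10 ms (assumed)

def amat(p_miss):
    """Average memory access time for a given miss probability."""
    return T_M + p_miss * T_D

for p_miss in (0.0, 0.001, 0.01, 0.1):
    # even a 0.1% miss rate makes the disk term dominate
    print(f"P_Miss = {p_miss:<5}  AMAT = {amat(p_miss) * 1e6:,.1f} us")
```

With these numbers, a miss rate of just 0.1% already makes AMAT two orders of magnitude worse than a perfect hit rate, which is why the policies below matter so much.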
  • Optimal Replacement Policy
    • Optimal policy
      • Replaces the page that will be accessed furthest in the future
      • Still incurs cold-start misses (compulsory misses) on the first access to each page
      • Otherwise optimal, but impossible to build for a general-purpose operating system, since the future is not known
    • FIFO
      • first-in, first-out replacement
    • Random
      • Picks a random page to replace under memory pressure
      • How well it does depends entirely on how lucky it gets
      • A policy worth using should do better than random
    • LRU
      • Using historical information: recency
      • Using the principle of locality
      • Replace least-recently-used
    • LFU
      • using historical information: frequency
      • using the principle of locality
      • Replace least-frequently-used
    • Other
      • MFU
      • MRU
      • Do not work well, since they ignore the locality most programs exhibit
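The policies above can be compared with a toy simulator. This is a minimal sketch, not from the text: the trace and cache size are made up, and the `simulate` helper is my own naming.

```python
import random

def simulate(trace, size, policy):
    """Return the hit rate of FIFO / LRU / RAND on a page-reference trace.

    The cache is a list ordered from oldest (FIFO) or least-recently-used
    (LRU) at the front to newest at the back.
    """
    cache, hits = [], 0
    for page in trace:
        if page in cache:
            hits += 1
            if policy == "LRU":              # refresh recency on a hit
                cache.remove(page)
                cache.append(page)
        else:
            if len(cache) >= size:           # memory pressure: evict one page
                if policy == "RAND":
                    cache.pop(random.randrange(len(cache)))
                else:                        # FIFO and LRU both evict the front
                    cache.pop(0)
            cache.append(page)
    return hits / len(trace)

trace = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]    # an arbitrary example trace
for policy in ("FIFO", "LRU", "RAND"):
    print(policy, simulate(trace, 3, policy))
```

Running it on traces with and without reuse shows the behavior the workload section below discusses: with reuse, LRU's recency bookkeeping pays off over FIFO.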
  • Workload
    • No locality
      • When there is no locality in the workload, it doesn’t matter much which realistic policy you use; LRU, FIFO, and Random all perform the same, with the hit rate determined exactly by the size of the cache
      • When the cache is large enough to fit the entire workload, it also doesn’t matter which policy you use
    • With locality (the 80-20 workload)
      • LRU does better, as it is more likely to hold onto the hot pages; as those pages have been referred to frequently in the past, they are likely to be referred to again in the near future
    • Looping-Sequential Workload
      • LRU does as badly as possible here, evicting each page just before it loops back around; Optimal once again does better, showing that LRU’s historical information is not perfect
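A tiny experiment makes the looping case concrete (a sketch with made-up sizes): loop over 50 pages with a 49-page LRU cache and every single access misses, while one more page of cache leaves only the 50 compulsory misses.

```python
def lru_hit_rate(trace, size):
    """Hit rate of plain LRU; the LRU page sits at the front of the list."""
    cache, hits = [], 0
    for page in trace:
        if page in cache:
            hits += 1
            cache.remove(page)               # refresh recency
        elif len(cache) >= size:
            cache.pop(0)                     # evict the least-recently-used
        cache.append(page)
    return hits / len(trace)

loop = list(range(50)) * 10                  # pages 0..49, looped 10 times
print(lru_hit_rate(loop, 49))                # 0.0: LRU's worst case
print(lru_hit_rate(loop, 50))                # 0.9: only compulsory misses
```

The cliff between cache sizes 49 and 50 is the signature of LRU on looping workloads: recency points at exactly the wrong victim.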
  • Historical algorithm
    • True LRU requires extra accounting work on every single memory reference
    • One solution → time-stamping hardware per page, but scanning every timestamp to find the oldest does not scale to machines with huge memories
    • Approximating LRU → what systems actually do today
      • use a bit (reference bit)
      • Clock algorithm (a circular list)
        • The clock hand points to some particular page
        • When a replacement must occur, check whether the currently pointed-to page P has its use bit set to 1 or 0
        • If the bit is 1, clear it to 0 and advance the hand to the next page
        • The algorithm continues until it finds a use bit that is set to 0, implying this page has not been recently used
        • Nice property: it does not repeatedly scan through all of memory looking for an unused page
      • scan resistance
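The clock hand's behavior can be sketched as follows. This is a minimal illustration; the class name and data layout are my own, and real hardware sets the use bit on the page-table entry rather than in a Python dict.

```python
class Clock:
    """Approximate-LRU replacement using one use (reference) bit per page."""

    def __init__(self, size):
        self.size = size
        self.frames = []     # pages arranged in a circular list
        self.use = {}        # page -> use bit, set on every access
        self.hand = 0        # the clock hand points at some particular page

    def access(self, page):
        """Touch a page; return True on a hit, False on a miss."""
        if page in self.use:
            self.use[page] = 1               # hit: the use bit gets set
            return True
        if len(self.frames) < self.size:
            self.frames.append(page)         # a free frame still exists
        else:
            # Replacement: advance the hand, clearing use bits that are 1,
            # until a page with use bit 0 (not recently used) is found.
            while self.use[self.frames[self.hand]] == 1:
                self.use[self.frames[self.hand]] = 0
                self.hand = (self.hand + 1) % self.size
            victim = self.frames[self.hand]
            del self.use[victim]
            self.frames[self.hand] = page
            self.hand = (self.hand + 1) % self.size
        self.use[page] = 1
        return False
```

Because the hand clears bits as it sweeps, a recently-touched page gets a second chance before eviction, which is what makes this a workable LRU approximation.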
  • Dirty pages
    • modified bit or dirty bit
      • Scan for unused and clean pages to evict first; failing to find those, evict unused pages that are dirty, and so forth
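That preference order can be sketched as a simple two-pass scan. This is illustrative only: the tuple layout and function name are assumptions, and real kernels track these bits in page-table entries rather than Python tuples.

```python
def pick_victim(pages):
    """pages: list of (name, use_bit, dirty_bit); return the page to evict.

    Prefer pages that are both unused and clean (cheap to drop), then
    unused-but-dirty pages (which must be written back to disk first).
    """
    for want_dirty in (0, 1):
        for name, use, dirty in pages:
            if use == 0 and dirty == want_dirty:
                return name
    return pages[0][0]   # everything recently used: fall back to any page

# the clean, unused page "c" is preferred over the dirty, unused "b"
print(pick_victim([("a", 1, 0), ("b", 0, 1), ("c", 0, 0)]))  # → c
```

The payoff is that evicting a clean page costs nothing extra, while a dirty victim forces a disk write before the frame can be reused.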
  • Page selection policy
    • Demand paging → OS brings the page into memory when it is accessed, “on-demand”
    • OS could guess that a page is about to be used, and thus bring it in ahead of time; this behavior is known as prefetching
  • OS writes pages out to disk
    • Collects a number of pending writes and writes them out at once
    • Clustering or grouping of writes
  • Thrashing
    • Occurs when memory demands exceed the available physical memory
    • Early systems → admission control
      • Decide to run only a subset of processes, hoping that the working sets of the running processes fit in memory so they can make progress
    • Current
      • out-of-memory killer
        • When memory is oversubscribed, it chooses a memory-intensive process and kills it
