
Paging: Introduction

  • Page
    • Split up our address space into fixed-size units, each called a page
    • With paging, physical memory is likewise split into fixed-size slots called page frames; each frame can hold a single virtual page
    • Advantages
      • Flexibility → the system can support the abstraction of an address space effectively, regardless of how a process uses it
      • Simplicity → the OS keeps a free list of all free pages; to place, say, a four-page address space, it just grabs the first four free pages off the list (sketched below)
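The allocation step above can be pictured with a tiny free-list sketch; the sizes and names here (NUM_FRAMES, alloc_frame) are illustrative, not from the text:

```c
#include <stdio.h>

#define NUM_FRAMES 16                /* illustrative: a tiny physical memory of 16 frames */

static int free_list[NUM_FRAMES];    /* physical frame numbers that are still free */
static int head = 0;                 /* index of the next free entry */

static void init_free_list(void) {
    for (int f = 0; f < NUM_FRAMES; f++)
        free_list[f] = f;
}

/* Grab the next free frame off the list, or return -1 if none are left. */
static int alloc_frame(void) {
    return (head < NUM_FRAMES) ? free_list[head++] : -1;
}

int main(void) {
    init_free_list();
    /* Placing a four-page address space: just take the first four free frames. */
    for (int vpn = 0; vpn < 4; vpn++)
        printf("VPN %d -> physical frame %d\n", vpn, alloc_frame());
    return 0;
}
```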
    • per-process data structure known as a page table
      • Main role: address translation
    • The inverted page table is not a per-process structure
    • To translate the virtual address that the process generated
      • we split it into two components: the VPN (virtual page number) and the offset within the page
      • Example: movl 21, %eax → in a 64-byte address space with 16-byte pages, 21 converts to 010101 in binary; the top bits (01) are the VPN and the remaining bits (0101) are the offset (see the sketch below)
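A minimal sketch of that split, assuming the book's tiny example of a 64-byte address space with 16-byte pages (2 VPN bits, 4 offset bits):

```c
#include <stdio.h>

#define PAGE_SIZE   16               /* 16-byte pages -> 4 offset bits */
#define OFFSET_BITS 4
#define OFFSET_MASK (PAGE_SIZE - 1)

int main(void) {
    unsigned int vaddr  = 21;                     /* the address from movl 21, %eax */
    unsigned int vpn    = vaddr >> OFFSET_BITS;   /* 21 = 010101b -> VPN = 01b = 1 */
    unsigned int offset = vaddr &  OFFSET_MASK;   /* offset = 0101b = 5 */
    printf("vaddr=%u  VPN=%u  offset=%u\n", vaddr, vpn, offset);
    return 0;
}
```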
    • Where are page tables stored?
      • Assume a 32-bit address space with 4-KB pages → a 12-bit offset is required
      • The remaining 20-bit VPN implies there are 2^20 translations the OS would have to manage for each process
      • Assume we need 4 bytes per page table entry (PTE) to hold the physical translation plus any other bits
        • That is 4 MB of page table per process; with many processes running, the memory needed just for page tables becomes huge (see the arithmetic below)
        • Page tables are therefore too big to keep in any special on-chip hardware, so we store the page table for each process in memory
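The arithmetic behind that claim, assuming a 32-bit address space, 4-KB pages, and 4-byte PTEs:

```c
#include <stdio.h>

int main(void) {
    unsigned long long vpn_bits    = 32 - 12;           /* 20-bit VPN */
    unsigned long long num_entries = 1ULL << vpn_bits;  /* 2^20 translations per process */
    unsigned long long pte_bytes   = 4;                 /* bytes per PTE */
    unsigned long long table_bytes = num_entries * pte_bytes;

    printf("linear page table: %llu bytes (%llu MB) per process\n",
           table_bytes, table_bytes >> 20);
    /* With 100 processes running, that would be roughly 400 MB of page tables. */
    return 0;
}
```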
    • Simple form → Linear page table
      • Each PTE contains a number of different bits; a sketch of a possible PTE layout follows this list
        • valid bit: indicate whether the particular translation is valid
          • unused portions of the address space are marked invalid
        • protection bits: indicating whether the page can be read from, written to, or executed
        • present bit: indicate whether this page is in physical memory or on disk
        • dirty bit: indicating whether the page has been modified since it was brought into memory
        • reference bit: to track whether a page has been accessed
          • used in page replacement
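One way to picture a single PTE is as a packed set of those bits plus the physical frame number; the field widths below are illustrative, not a real hardware layout (an actual x86 PTE arranges these differently):

```c
#include <stdint.h>

/* Sketch of one linear-page-table entry: control bits plus a physical frame number. */
typedef struct {
    uint32_t valid     : 1;   /* is this translation in use at all? */
    uint32_t prot_r    : 1;   /* protection: page may be read */
    uint32_t prot_w    : 1;   /* protection: page may be written */
    uint32_t prot_x    : 1;   /* protection: page may be executed */
    uint32_t present   : 1;   /* in physical memory (1) or swapped to disk (0) */
    uint32_t dirty     : 1;   /* modified since it was brought into memory */
    uint32_t reference : 1;   /* accessed recently; consulted during page replacement */
    uint32_t pfn       : 20;  /* physical frame number this virtual page maps to */
} pte_t;
```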
      • Paging → can be slow
        • For every memory reference, the system must first translate the virtual address into a physical address
        • To do that, the hardware must know where the page table of the currently running process is located
        • Assume a single page-table base register (PTBR) contains the physical address of the starting location of the page table
        • Paging therefore requires one extra memory access per reference just to fetch the translation from the page table (see the sketch below)
        • Without careful design, page tables will cause the system to run too slowly and take up too much memory
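A hedged software sketch of what the hardware does on each reference with a linear page table; the tiny 4-entry table and the sizes are made up for illustration, and fault handling is reduced to a return code:

```c
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12                       /* 4-KB pages */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)
#define NUM_PAGES   4                        /* a tiny 4-page address space */

typedef struct { uint32_t valid; uint32_t pfn; } pte_t;

/* The page table lives in memory; the "page-table base register" just points to it. */
static pte_t page_table[NUM_PAGES] = { {1, 3}, {1, 7}, {0, 0}, {1, 2} };
static pte_t *ptbr = page_table;

/* Returns 1 and fills *paddr on success; 0 means the access would fault. */
static int translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn    = vaddr >> OFFSET_BITS;
    uint32_t offset = vaddr &  OFFSET_MASK;

    if (vpn >= NUM_PAGES) return 0;

    pte_t pte = ptbr[vpn];                   /* the extra memory access: fetch the PTE */
    if (!pte.valid) return 0;                /* would raise a segmentation fault */

    *paddr = (pte.pfn << OFFSET_BITS) | offset;   /* the real access then uses this */
    return 1;
}

int main(void) {
    uint32_t paddr;
    if (translate(0x1004, &paddr))           /* VPN 1, offset 4 */
        printf("virtual 0x1004 -> physical 0x%x\n", paddr);
    return 0;
}
```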
      • Memory trace
        • The first instruction moves the value zero (shown as $0x0) into the virtual memory address of the location of the array;
        • The second instruction increments the array index held in %eax, and the third instruction compares the contents of that register to the hex value 0x03e8, or decimal 1000.
        • To trace what happens, we must know where in virtual memory the code snippet and the array are found, as well as the contents and location of the page table
        • Let’s assume we have a linear (array-based) page table and that it is located at a physical address of 1 KB (1024)
        • Worries
          • First, there is the virtual page the code lives on. Because the page size is 1 KB, virtual address 1024 resides on the second page of the virtual address space (VPN=1)
          • Second, there is the array itself. Its size is 4000 bytes (1000 integers), and it lives at virtual addresses 40000 through 44000 (not including the last byte). The virtual pages for this decimal range are VPN=39 … VPN=42, so we need mappings for these four pages as well
        • When it runs, each instruction fetch will generate two memory references: one to the page table to find the physical frame that the instruction resides within, and one to the instruction itself to fetch it to the CPU for processing.
        • In addition, each iteration has one explicit memory reference in the form of the mov instruction; this adds another page table access first (to translate the array virtual address to the correct physical one) and then the array access itself
        • Need to study the trace figure carefully: each of the first five loop iterations generates ten memory accesses (four instruction fetches, one explicit memory update, and five page table accesses); the traced code itself is shown below
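For reference, the code being traced is the book's array-initialization loop; the four instructions described above are (roughly) what its body compiles to:

```c
/* The loop being traced: initialize a 1000-entry array to zero. */
int array[1000];

int main(void) {
    for (int i = 0; i < 1000; i++)
        array[i] = 0;
    return 0;
}

/* The loop body corresponds to the four x86 instructions discussed above,
 * laid out at virtual addresses 1024..1036 (i.e., on VPN 1):
 *
 *   1024  movl $0x0,(%edi,%eax,4)   store 0 at array base (%edi) + index (%eax) * 4
 *   1028  incl %eax                 increment the array index
 *   1032  cmpl $0x03e8,%eax         compare the index with 1000 (0x3e8)
 *   1036  jne  0x1024               if not yet equal, jump back to the movl
 */
```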
