DEV Community

Harsh Mishra

Memory Management: Operating System

Guide to Memory Management in Operating Systems


Introduction to Memory Management

Memory management is a fundamental function of an operating system (OS) that handles the allocation, tracking, and optimization of a computer's memory resources. The main objectives of memory management are:

  1. Efficient Utilization: Ensure efficient use of memory space.
  2. Protection: Prevent processes from interfering with each other’s memory.
  3. Flexibility: Allow dynamic allocation and deallocation of memory.
  4. Speed: Enable fast and seamless memory access for processes.

Registers and Addressing

Memory addressing and management involve hardware-level mechanisms, specifically the Memory Management Unit (MMU) and various registers.

Registers

Registers are small, high-speed storage locations directly accessible by the CPU. They store key memory-related information:

  1. Base Register: Holds the starting address of a process's allocated memory space.
  2. Limit Register: Specifies the size (or upper bound) of the allocated memory block. The hardware checks every memory access a process makes against these registers to ensure it stays within permissible bounds; the OS sets them on each context switch.

Memory Addressing

  1. Logical Address: The address generated by the CPU during program execution.
  2. Physical Address: The actual location in physical memory (RAM).

The MMU converts logical addresses to physical addresses through address translation.
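As a rough illustration, base/limit relocation can be sketched in Python (the numbers are made up; in a real system the MMU performs this check in hardware on every access):

```python
# Minimal sketch of base/limit relocation -- illustrative only;
# real translation and bounds checking happen in the MMU hardware.
def translate(logical_addr, base, limit):
    # Protection check: a logical address must fall within [0, limit)
    if not 0 <= logical_addr < limit:
        raise MemoryError("address out of bounds (would trap to the OS)")
    # Relocation: physical address = base register + logical address
    return base + logical_addr

print(translate(100, base=4000, limit=1024))   # 4100
```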

Memory Management Unit (MMU)

The MMU is a hardware component responsible for:

  1. Translating logical addresses to physical addresses.
  2. Enforcing memory protection policies.
  3. Supporting advanced memory techniques like paging and segmentation.

Swapping

Swapping is a memory management technique where processes are temporarily transferred between main memory (RAM) and secondary storage (disk). This allows the system to handle more processes than can fit into RAM at any given time.

How Swapping Works:

  1. When memory demand exceeds available RAM, a process not actively using the CPU is swapped out (moved to disk).
  2. When the process is needed again, it is swapped in (moved back into RAM).
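The two steps above can be mimicked with a toy model (the process names and RAM capacity are made up for illustration; real kernels swap at the page or segment level with disk I/O):

```python
# Toy swapping model: RAM holds at most RAM_CAPACITY processes;
# admitting a new one swaps out the oldest resident to "disk".
ram, disk, RAM_CAPACITY = [], [], 2

def admit(process):
    if len(ram) >= RAM_CAPACITY:
        victim = ram.pop(0)     # swap out: oldest resident goes to disk
        disk.append(victim)
    ram.append(process)         # swap in / load the new process

for p in ["P1", "P2", "P3"]:
    admit(p)
print(ram, disk)   # ['P2', 'P3'] ['P1']
```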

Advantages:

  • Increases multiprogramming.
  • Enables execution of large processes.

Disadvantages:

  • High disk I/O overhead (swap-in and swap-out operations).
  • Slower performance due to frequent disk access.

Contiguous Memory Allocation

This is a memory allocation technique where a process is assigned a single contiguous block of memory. It is simple to implement but prone to fragmentation.

Memory Allocation Techniques

  1. Fixed Partitioning:

    • Divides memory into fixed-sized partitions.
    • Simple but leads to internal fragmentation (unused memory within partitions).
  2. Variable Partitioning:

    • Allocates memory dynamically based on process requirements.
    • Prone to external fragmentation (scattered free memory spaces).

Memory Allocation Strategies

  1. First Fit:

    • Allocates the first memory block large enough for the process.
    • Advantage: Fast allocation.
    • Disadvantage: May cause fragmentation in the early parts of memory.
  2. Best Fit:

    • Allocates the smallest block that satisfies the process size.
    • Advantage: Reduces wastage of memory.
    • Disadvantage: Slower due to the need to search for the best block.
  3. Worst Fit:

    • Allocates the largest available block to the process.
    • Advantage: Leaves larger free spaces for future processes.
    • Disadvantage: May lead to underutilized memory blocks.
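A minimal sketch of the three strategies over a list of free-block sizes (the block sizes, in KB, are made up for illustration):

```python
# Each strategy returns the index of the chosen free block, or None.
def first_fit(blocks, size):
    # First block large enough, scanning from the start
    return next((i for i, b in enumerate(blocks) if b >= size), None)

def best_fit(blocks, size):
    # Smallest block that still fits the request
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return min(fits)[1] if fits else None

def worst_fit(blocks, size):
    # Largest available block
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return max(fits)[1] if fits else None

free = [100, 500, 200, 300, 600]     # free block sizes in KB
print(first_fit(free, 212))  # 1 (500 KB is the first that fits)
print(best_fit(free, 212))   # 3 (300 KB is the tightest fit)
print(worst_fit(free, 212))  # 4 (600 KB is the largest)
```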

Fragmentation

Internal Fragmentation:

  • Occurs when allocated memory is slightly larger than required, leaving unused space within partitions.
  • Example: A process requiring 24 KB that is allocated a 32 KB partition leaves 8 KB unused inside the partition.

External Fragmentation:

  • Occurs when free memory is scattered in small blocks across the system, making it difficult to allocate contiguous memory to a process.

Solution:

  • Compaction: Rearranges memory to combine free blocks into a single large block. However, this is time-consuming and requires process relocation.
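Compaction can be sketched as sliding all allocated segments toward address 0 so the free space coalesces into one block (the segment layout here is hypothetical):

```python
# Compaction sketch: relocate allocated segments to be contiguous,
# leaving a single free block at the top of memory.
def compact(segments, memory_size):
    # segments: list of (start, size) tuples for allocated regions
    addr, relocated = 0, []
    for _, size in sorted(segments):      # keep order by original address
        relocated.append((addr, size))    # slide segment down to `addr`
        addr += size
    free_block = (addr, memory_size - addr)
    return relocated, free_block

segs = [(0, 100), (300, 200), (700, 150)]   # scattered allocations
print(compact(segs, 1000))
# ([(0, 100), (100, 200), (300, 150)], (450, 550))
```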

Paging

Paging is a non-contiguous memory allocation technique that eliminates external fragmentation by dividing memory into fixed-sized units.

How Paging Works:

  1. Pages: The logical memory is divided into fixed-size blocks called pages.
  2. Frames: The physical memory is divided into blocks of the same size as pages, called frames.
  3. Page Table: Maintains the mapping between logical page numbers and physical frame numbers.

Address Translation in Paging

A logical address is divided into:

  1. Page Number (p): Identifies the specific page in the process’s memory.
  2. Page Offset (d): Specifies the exact location within a page.

Example:
Logical Address = 17 bits, Page Size = 4 KB (2^12 bytes)

  • Offset: log2(4 KB) = 12 bits
  • Page Number: 17 - 12 = 5 bits
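With a power-of-two page size, the split is just two bit operations; a sketch assuming 4 KB pages and a made-up 17-bit address:

```python
# Splitting a logical address into page number and offset,
# assuming a 4 KB (2^12-byte) page size.
OFFSET_BITS = 12                              # log2(4096)
addr = 0b10110_000001111000                   # example 17-bit address
page = addr >> OFFSET_BITS                    # high bits: page number
offset = addr & ((1 << OFFSET_BITS) - 1)      # low 12 bits: offset
print(page, offset)   # 22 120
```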

Advantages:

  • Eliminates external fragmentation.
  • Allows efficient use of memory.

Disadvantages:

  • Requires additional memory for the page table.
  • Slower address translation compared to contiguous allocation.

Translation Lookaside Buffer (TLB)

The TLB is a hardware cache that stores a subset of page table entries for faster access.

Working:

  1. When a logical address is translated, the TLB is checked first.
  2. If the TLB contains the required page table entry (TLB hit), the physical address is obtained directly.
  3. If the required entry is not in the TLB (TLB miss), the page table is accessed, and the entry is loaded into the TLB.
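The hit/miss flow can be modeled with a dictionary standing in for the TLB (the page-table contents and reference pattern are made up for illustration):

```python
# TLB lookup sketch: a small dict acts as a cache in front of the page table.
tlb, page_table = {}, {0: 5, 1: 9, 2: 7}
hits = misses = 0

def lookup(page):
    global hits, misses
    if page in tlb:                  # TLB hit: no page-table walk needed
        hits += 1
    else:                            # TLB miss: walk the page table,
        misses += 1                  # then cache the entry in the TLB
        tlb[page] = page_table[page]
    return tlb[page]

for p in [2, 2, 0, 2, 0]:
    lookup(p)
print(hits, misses)   # 3 2
```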

Advantages:

  • Speeds up address translation by avoiding frequent memory accesses.
  • Reduces the effective memory-access time of paged systems (a hit avoids an extra page-table access).

Disadvantages:

  • Limited size (can store only a subset of page table entries).
  • Increases system complexity.

Memory Allocation in Paging

Memory is allocated in terms of pages and frames. Processes are allocated frames in physical memory, but they only see their own pages in logical memory.

Example of Address Translation:

  • Logical Address: Page Number = 2, Offset = 1024
  • Page Table:

| Page Number | Frame Number |
|-------------|--------------|
| 0           | 5            |
| 1           | 9            |
| 2           | 7            |

Physical Address: Frame Number = 7, Offset = 1024
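Assuming 4 KB pages, the example translation can be reproduced end to end:

```python
PAGE_SIZE = 4096                       # assumed 4 KB page size
page_table = {0: 5, 1: 9, 2: 7}        # page -> frame, as in the table above

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]           # page-table lookup
    return frame * PAGE_SIZE + offset  # physical address

logical = 2 * PAGE_SIZE + 1024         # page 2, offset 1024
print(translate(logical))              # 7 * 4096 + 1024 = 29696
```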


Virtual Memory

Virtual memory is a memory management technique that allows the execution of processes that may not be entirely in physical memory. It separates logical memory (used by applications) from physical memory (RAM) by extending the system’s memory using a portion of secondary storage (e.g., a hard disk or SSD).

Key Features of Virtual Memory:

  1. Logical Address Space: Processes operate within a larger logical address space than the actual physical memory size.
  2. Demand Paging: Only portions of a process's memory are loaded into physical memory as needed.
  3. Page Replacement: Determines how memory pages are replaced when physical memory is full.

Demand Paging

Demand paging is a virtual memory implementation where pages are only loaded into memory when they are needed (on demand).

Working of Demand Paging:

  1. When a process tries to access a page that is not in memory, a page fault occurs.
  2. The OS interrupts the process and loads the required page from secondary storage into memory.
  3. The page table is updated to reflect the new mapping.
  4. The process resumes execution.
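The steps above can be stripped down to a fault counter (no replacement yet; the reference string is made up for illustration):

```python
# Demand-paging sketch: a page is "loaded" only on its first access,
# and that first access counts as a page fault.
resident, faults = set(), 0

def access(page):
    global faults
    if page not in resident:   # page fault: fetch the page from "disk"
        faults += 1
        resident.add(page)
    # ... the access then proceeds against memory ...

for p in [0, 1, 0, 2, 1, 0]:
    access(p)
print(faults)   # 3 (pages 0, 1, 2 each fault exactly once)
```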

Advantages:

  • Reduces memory wastage by loading only necessary pages.
  • Supports large processes by allowing partial loading into memory.

Disadvantages:

  • Frequent page faults can degrade performance.
  • Higher dependency on disk I/O.

Page Fault

A page fault occurs when a process attempts to access a page that is not currently in physical memory.

Handling a Page Fault:

  1. The process generates an interrupt.
  2. The OS identifies the missing page and locates it on disk.
  3. If memory is full, the OS uses a page replacement algorithm to free a frame.
  4. The required page is loaded into memory.
  5. The page table is updated, and the process resumes.

Page Fault Rate:

  • A measure of system performance in virtual memory.
  • Low Rate: Efficient memory utilization.
  • High Rate: May indicate thrashing.

Page Replacement Algorithms

When a page fault occurs and memory is full, the OS must decide which page to replace. The goal is to minimize page faults while maintaining efficiency.

1. First-In-First-Out (FIFO)

  • Replaces the oldest page in memory.
  • Maintains a queue of pages in the order they were loaded.

Advantages:

  • Simple to implement.

Disadvantages:

  • Can cause Belady’s Anomaly: Increasing the number of page frames may increase page faults.

2. Belady’s Anomaly

  • A counterintuitive situation where increasing the number of page frames leads to more page faults.
  • Specific to certain algorithms like FIFO.

Example:

  1. Consider a reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
  2. With 3 frames, FIFO may generate more page faults than with 4 frames due to the order of replacement.
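This can be checked with a short FIFO simulation over the reference string above:

```python
from collections import deque

# FIFO page replacement: count faults for a given number of frames.
def fifo_faults(refs, frames):
    memory, faults = deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()      # evict the oldest page
            memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: Belady's Anomaly
```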

3. Optimal Page Replacement

  • Replaces the page that will not be used for the longest time in the future.

Advantages:

  • Minimizes page faults.

Disadvantages:

  • Requires future knowledge of page references (not practical in real-time systems).
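A sketch of the optimal (MIN) policy, which scans the remainder of the reference string to pick the victim (the reference string reuses the Belady example above):

```python
# Optimal (MIN) replacement: evict the page whose next use lies
# farthest in the future -- requires the full reference string upfront.
def optimal_faults(refs, frames):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            def next_use(p):
                # Distance to p's next reference; infinite if never used again
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            memory.remove(max(memory, key=next_use))
        memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 3))   # 7 faults -- fewer than FIFO's 9
```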

4. Least Recently Used (LRU)

  • Replaces the page that has not been used for the longest time.
  • Implements aging or tracking mechanisms to determine usage history.

Advantages:

  • Performs better than FIFO in most cases.
  • Does not suffer from Belady’s Anomaly.

Disadvantages:

  • Requires complex data structures for tracking (e.g., stacks or counters).
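One common way to track recency is an ordered map; a sketch using Python's OrderedDict (the reference string again reuses the Belady example):

```python
from collections import OrderedDict

# LRU replacement: the OrderedDict keeps pages ordered by recency of use.
def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)       # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False) # evict least recently used
            memory[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # 10 faults
print(lru_faults(refs, 4))   # 8 faults: more frames, fewer faults
```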

Thrashing

Thrashing occurs when the system spends more time handling page faults and swapping pages in and out of memory than executing processes.

Causes:

  1. High Multiprogramming: Too many processes competing for limited memory.
  2. Insufficient Frames: Processes have insufficient memory to hold their working set.
  3. Frequent Page Faults: Constant replacement of pages leads to excessive disk I/O.

Working Set Model:

  • A working set is the set of pages a process actively uses.
  • Thrashing can be mitigated by ensuring each process has enough frames to store its working set.
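The working set at time t can be sketched as the distinct pages seen in the last delta references (delta, the window size, is a tuning parameter; the reference string is made up):

```python
# Working-set sketch: distinct pages referenced in the last `delta`
# references up to and including time t.
def working_set(refs, t, delta):
    window = refs[max(0, t - delta + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 2, 2, 2, 4]
print(working_set(refs, t=6, delta=4))   # {2, 3}: pages in refs[3:7]
```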

Solutions to Thrashing:

  1. Reduce Multiprogramming: Limit the number of processes running simultaneously.
  2. Increase RAM: Add more physical memory to the system.
  3. Adjust Page Replacement Algorithms: Use efficient algorithms like LRU.

Conclusion

Memory management is a vital function in modern operating systems, ensuring efficient utilization of memory while maintaining system stability and performance. Techniques like swapping, paging, and the use of TLBs optimize memory allocation and access. By understanding the intricacies of contiguous and non-contiguous memory allocation, along with addressing methods, system designers can achieve a balance between performance and resource efficiency.
