Functions of OS

a) Scheduler (time allocation to programs)
b) Memory manager (space allocation to programs)
c) Synchronizer (avoids conflicts over a resource at the same time)
d) Protection (avoids conflicts over memory)

Scheduling Algo

The basic idea is to utilize the time a task spends waiting on IO by scheduling another task on the CPU.

SJF (Shortest Job First) is provably optimal with respect to average waiting time.
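The SJF claim can be checked with a small simulation. This is a sketch with made-up burst times, assuming all jobs arrive at time 0 and run to completion (non-preemptive):

```python
# Sketch: average waiting time under FCFS vs SJF for one batch of jobs
# (all arriving at time 0; burst times are made-up illustrative values).

def avg_waiting_time(bursts):
    """Waiting time of each job = sum of the bursts scheduled before it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return sum(waits) / len(waits)

bursts = [8, 4, 1, 3]                    # FCFS runs them in arrival order
fcfs = avg_waiting_time(bursts)          # (0 + 8 + 12 + 13) / 4 = 8.25
sjf = avg_waiting_time(sorted(bursts))   # (0 + 1 + 4 + 8) / 4 = 3.25
print(fcfs, sjf)
```

Running the short jobs first means fewer jobs sit waiting behind a long burst, which is why sorting by burst length minimizes the average wait.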

Problem with scheduling based on priority – priority inversion (a high-priority task ends up waiting on a resource held by a low-priority one)

Switching from one thread to another (A –> B ) can occur when
a) A is complete
b) A is blocked on IO
c) Timer interrupt forcefully preempts A

Synchronization

Synchronization is needed because processes need to exchange information using a shared region of memory. However, this shared region cannot be accessed in an arbitrary manner; it needs to be accessed in the correct
sequence. Imagine a bank balance being updated by two different processes. So we need to coordinate access among threads.
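The bank-balance race can be made concrete. The sketch below simulates the bad interleaving step by step (rather than using real threads, so the outcome is reproducible); the values are made up:

```python
# Sketch of the lost-update race on a shared bank balance. The two
# "threads" are simulated one step at a time so the bad interleaving
# is reproducible: both read the old balance before either writes back.

balance = 100

# Thread A and thread B each want to deposit 50.
a_read = balance          # A reads 100
b_read = balance          # B reads 100 (before A has written!)
balance = a_read + 50     # A writes 150
balance = b_read + 50     # B overwrites with 150 -- A's deposit is lost

print(balance)            # 150, not the correct 200
```

The read-modify-write is not atomic, so one update silently overwrites the other; this is exactly what mutual exclusion over the shared region prevents.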

The part of the program where shared memory is accessed is called the critical region. Critical regions must satisfy four conditions:
1) No two processes may be simultaneously in the same critical region
2) No process running outside its critical region may block another process
3) No process should wait forever to enter its critical region
4) No assumptions may be made about the speed or the number of CPUs

These are called Mutual Exclusion, Progress, Bounded Waiting, and Speed Independence, respectively.

Approaches to Mutual Exclusion

Some approaches:
a) Disabling interrupts in the critical section
b) Lock variables
c) Strict alternation

None of these is satisfactory: lock variables and strict alternation rely on busy waiting, and disabling interrupts cannot be entrusted to user processes (and does not work across multiple CPUs).
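Strict alternation makes the busy-waiting problem visible. A minimal sketch using Python threads (variable names are my own):

```python
# Sketch of strict alternation between two threads using a shared
# `turn` variable. The `while turn != me` loop is the busy wait: a
# thread spins, burning CPU, until it is its turn -- even if the
# other thread is doing nothing useful.

import threading

turn = 0          # whose turn it is to enter the critical section
counter = 0       # shared data protected by the alternation
ROUNDS = 1000

def worker(me, other):
    global turn, counter
    for _ in range(ROUNDS):
        while turn != me:         # busy wait (wastes CPU cycles)
            pass
        counter += 1              # critical section
        turn = other              # hand the turn to the other thread

t0 = threading.Thread(target=worker, args=(0, 1))
t1 = threading.Thread(target=worker, args=(1, 0))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)    # 2000: alternation serialized every increment
```

Mutual exclusion holds, but the spin loop wastes cycles, and if one thread dawdles outside its critical section the other cannot re-enter, violating the progress condition.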

So a sleep-and-wakeup mechanism is used instead: a process, rather than busy waiting, is put on a queue and woken when it can proceed. This leads to the concept of semaphores.

Solving Producer Consumer problem using Semaphore

Producer-consumer problems occur, for example, in networks (sending and receiving packets) and between the CPU and IO devices.

down(empty_slot)
put item in buffer
up(full_slot)

The above solution works for a single producer and a single consumer.
When multiple producers and consumers are present, a race condition can occur, resulting in two different producers writing to the same slot.
To overcome this, we must ensure that only one producer at a time executes “put item in buffer”, i.e. the buffer update must happen inside a critical section under mutual exclusion:

down(empty_slot)
down(mutex)
put item in buffer
up(mutex)
up(full_slot)
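The pseudocode above can be turned into a runnable sketch with Python's `threading.Semaphore`, where `down` corresponds to `acquire` and `up` to `release` (buffer size and item count are arbitrary illustrative values):

```python
# Runnable sketch of the bounded-buffer solution using semaphores.
# empty_slot counts free slots, full_slot counts filled slots, and
# mutex protects the buffer itself when several threads run at once.

import threading
from collections import deque

N_SLOTS, N_ITEMS = 4, 100
buffer = deque()
empty_slot = threading.Semaphore(N_SLOTS)   # down() == acquire()
full_slot = threading.Semaphore(0)          # up()   == release()
mutex = threading.Lock()
consumed = []

def producer():
    for i in range(N_ITEMS):
        empty_slot.acquire()          # down(empty_slot)
        with mutex:                   # down(mutex) ... up(mutex)
            buffer.append(i)          # put item in buffer
        full_slot.release()           # up(full_slot)

def consumer():
    for _ in range(N_ITEMS // 2):
        full_slot.acquire()           # down(full_slot)
        with mutex:
            item = buffer.popleft()   # remove item from buffer
            consumed.append(item)
        empty_slot.release()          # up(empty_slot)

threads = [threading.Thread(target=producer)] + \
          [threading.Thread(target=consumer) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(consumed) == list(range(N_ITEMS)))   # True: no item lost
```

Note the ordering: a producer takes `empty_slot` *before* `mutex`. Reversing that order (holding the mutex while sleeping on a full buffer) would deadlock, since no consumer could ever acquire the mutex to free a slot.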

Memory Management

One-dimensional addressing becomes cumbersome if different portions of the program have to grow or shrink.

So the OS provides several independent address spaces called segments. Sharing of code/data then becomes easy, and per-segment protection is possible.

Virtual memory provides the illusion of a memory larger than physical RAM.

Segmentation is good in the sense that it gives the flexibility to load programs anywhere in memory. However, it still suffers from external fragmentation,
i.e., there may be holes in memory that are not large enough to be allocated to any process.

To avoid this, paging was introduced. With paging, the entire memory is divided into fixed-size pages. Though there is no wastage due to external fragmentation, there may be wastage within a page. This is called internal fragmentation.
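The internal-fragmentation cost is easy to quantify. A back-of-the-envelope sketch, with an arbitrary example process size:

```python
# Internal fragmentation under fixed-size pages: the last page of a
# process is, on average, only half used. Sizes here are made-up
# illustrative values.

PAGE_SIZE = 4096                       # 4 KiB pages

def pages_needed(proc_size):
    return -(-proc_size // PAGE_SIZE)  # ceiling division

def internal_fragmentation(proc_size):
    return pages_needed(proc_size) * PAGE_SIZE - proc_size

print(pages_needed(10300))             # 3 pages for a 10300-byte process
print(internal_fragmentation(10300))   # 3*4096 - 10300 = 1988 bytes wasted
```

This also shows the trade-off in picking a page size: smaller pages waste less space per process but need larger page tables.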

In principle this is similar to (fully associative) caching. The pages that are referenced are loaded into RAM; the others are kept on disk (demand paging).

Now what happens when there is no space left in RAM? One of the resident pages has to be evicted. This leads to the concept of page replacement algorithms.

The easiest method is FIFO: the page that was loaded earliest is evicted. But what if that page is the most frequently accessed one? FIFO ignores usage, so it can evict hot pages. Worse, Belady showed that under FIFO, increasing the memory size does not necessarily reduce page faults (Belady's anomaly). Though this happens only for particular reference patterns, it means we cannot simply add memory and expect page faults to drop. Stack algorithms such as LRU do not exhibit Belady's anomaly.
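Belady's anomaly can be reproduced with the well-known reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, on which FIFO with four frames faults more often than with three:

```python
# FIFO page replacement on the classic reference string that exhibits
# Belady's anomaly: 4 frames cause MORE page faults than 3 frames.

from collections import deque

def fifo_faults(refs, n_frames):
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()       # evict the oldest-loaded page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -- more memory, more faults
```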

The optimal method is to find the page that will not be referenced for the longest time and replace that one. This is the OPTIMAL policy. But it cannot be implemented, as it requires looking into the future. So the future is approximated by past behaviour. Temporal locality says that pages accessed in the recent past are likely to be accessed in the near future; conversely, pages that were not accessed recently are unlikely to be accessed soon. So replace the page that was least recently used (LRU).
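A sketch of LRU using an `OrderedDict` as the recency queue, run on the same classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 for comparison:

```python
# LRU page replacement simulated with an OrderedDict: the most
# recently used pages sit at the end, the least recently used at
# the front, which is where eviction happens.

from collections import OrderedDict

def lru_faults(refs, n_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # touched: now most recent
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # 10 faults
print(lru_faults(refs, 4))   # 8 faults -- more memory helps, as expected
```

Unlike FIFO on this string, adding a frame strictly reduces the fault count, consistent with LRU being a stack algorithm.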

Exact LRU is difficult to implement: the recency order must be updated on every memory reference, so a software implementation would be far too slow, and the necessary hardware is expensive. So an approximation such as NRU (Not Recently Used) is used instead.

The LRU clock algorithm, or second-chance algorithm, is the one used (in some variant) by most operating systems.

Each page has a reference bit that is set by the hardware when the page is used. Replace a page whose reference bit is not set.
However, this is of no use if every page's reference bit is set — we would not know which page to replace.
So the reference bits are periodically cleared by the OS.
The algorithm is a combination of FIFO + the reference bit.
The pages are kept in a circular list. Scan the pages: if a page's reference bit is one, set it to zero and proceed; if the bit is zero, evict that page.
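The eviction sweep described above can be sketched as follows. This is a minimal illustration of one eviction decision, not a full paging system; the page names and bit values are made up:

```python
# Sketch of the clock (second-chance) eviction sweep: pages sit in a
# fixed circular array; a "hand" advances, clearing reference bits,
# until it finds an unreferenced page to evict.

def clock_evict(pages, ref_bits, hand):
    """Return (index to evict, new hand position); clears bits it passes."""
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = 0            # second chance: clear bit, move on
            hand = (hand + 1) % len(pages)
        else:
            victim = hand                 # bit already 0: evict this page
            hand = (hand + 1) % len(pages)
            return victim, hand

pages = ['A', 'B', 'C', 'D']
ref_bits = [1, 1, 0, 1]                   # A, B, D referenced recently
victim, hand = clock_evict(pages, ref_bits, 0)
print(pages[victim])                      # 'C' -- first page with bit 0
print(ref_bits)                           # [0, 0, 0, 1]: A and B used up a chance
```

A recently used page is never evicted on the first pass; it merely loses its reference bit, and survives only if it is referenced again before the hand comes back around.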
