Operating Systems

12 minute read

What is an Operating System?

The operating system is a piece of software whose job is to take the base hardware of a device and turn it into a usable machine for the user. It acts as the intermediary between the user and the hardware, letting the user interact with the hardware easily, often without even knowing they are doing so.

Examples of operating systems include:

  • Unix
  • Linux
  • Windows
  • macOS
  • iOS
  • Android

Essential Managers of Operating Systems

There are 5 main managers of operating systems:

  • Memory manager
  • Process manager
  • Device manager
  • File manager
  • Network manager

The network manager is a more recent addition to the operating system; early operating systems had no networking capabilities, and some still do not.

The operating system continuously monitors resources and enforces policies across the system.

Memory Manager

The Memory Manager is in charge of the main memory of the computer system.

Its tasks involve:

  • Preserves and protects the space in main memory occupied by the operating system
  • Checks the validity of each request for memory space
  • For valid requests, allocates areas of memory not already in use
  • In a multi-user system, keeps track of which users are using which sections of memory
  • Deallocates sections of memory once they are finished with

Processor Manager

Decides how to allocate the Central Processing Unit (CPU)

The tasks of a processor manager are:

  • Creates processes when necessary to carry out tasks
  • Performs initialisation of new processes
  • Keeps track of the status of processes
  • Assigns processes to the CPU when it is available
  • Changes process states as events occur
  • Handles termination of processes on completion or abort
  • Handles inter-process communication
  • Manages process queues and prioritisation

Something to note is that a process is not the same as a program.

Device Manager

Monitors every device and control unit

The tasks of the device manager are:

  • Allocates system devices
  • Deals with multiple requests for the same device
  • Performs communication with the device during operation
  • Deallocates devices

File Manager

The File manager keeps track of every file in the system.

Its tasks are:

  • Organises all files
  • Manages location of files
  • Enforces restrictions on files
  • Deals with standard file operations (delete, make new, etc)

Network Manager

Modern operating systems require a network manager. The network manager allows users to share resources while controlling user access to them.

These resources can include hardware, such as printers or disk drives, or software, such as files.

Interaction between OS and Managers

Each manager has specific individual tasks to perform, but it is not enough for each to operate on its own. Each manager must be able to work in harmony with the others.

Example

A user types in a command at the keyboard to execute a program. The following simplified steps might occur:

Device Manager: Receives the keystrokes and builds up the command line. Informs the process manager that input has been received.

Processor Manager: Activates the process awaiting a command. That process checks the validity of the command and then tries to execute the corresponding program.

File Manager: Checks whether the program file is already in memory. If not, it finds it on external storage and issues a request for it.

Device Manager: Communicates with the disk drive and retrieves the program file from disk.

This goes on for quite a while, but you get the gist.

Evolution of Operating Systems

In early systems, jobs were loaded from punched cards, tape, or other physical media. A job might include loading the compiler, assembler, etc. The CPU remained idle much of the time, waiting for user input, and jobs ran sequentially from start to finish.

Batch systems were then invented: jobs were passed to a human operator, who grouped jobs with similar characteristics into batches, making more efficient use of resources.

Multiprogramming

Multiprogramming is where several programs are loaded into memory simultaneously, sharing the same CPU. When a running program cannot continue (perhaps because it is waiting for input from the user), the operating system switches to another program.

img

As you can see in this image, the CPU executes one program while the other programs are waiting for input.

Multi-Access (Time-sharing)

This is an extension of multiprogramming. The CPU switches rapidly between processes to give the illusion of uninterrupted parallel execution.

Time-sharing the CPU

img

Each program is allocated a fixed slice of time before the clock interrupts the program.

Interrupts

The time-sharing method depends on the ability to interrupt execution at regular clock intervals. There are many circumstances in which it is desirable to interrupt the normal flow of a program to react to special events, such as:

  • User interrupts via keyboard / mouse usage
  • Completion of an I/O task
  • Power on or off

An interrupt is an automatic call of a sequence of instructions, usually brought about by an event occurring outside the normal program execution.

Such a sequence of instructions is known as an interrupt service routine (ISR). Unlike an ordinary subroutine call, an interrupt can occur at any point in the execution sequence.

The address of each ISR is usually contained in an interrupt vector.

Certain interrupts (e.g. a clock timeout) will not return control to the interrupted program. Instead, control is handed to another program (a context switch). If the programmer does not know when this will happen, how do we protect the register values of the original program?

The answer is for the OS to save them in a special data structure called the process control block (PCB).

The OS maintains a table of such descriptors, one for each process.

Register values can be reloaded when execution resumes at a later time.

Processes

A program is a representation of an algorithm in some programming language.

A process refers to the activity performed by a computer when executing a program.

A process is created when a program or command is executed.

However, the correspondence is not always one-to-one.

A single program may have several processes.

OS Structure

The OS has:

  • A central kernel
    • resides permanently in memory
    • performs low-level, frequently needed tasks
  • A set of processes
    • may be created by the kernel or to service user activity
  • Processes interact with the kernel via system calls
    • create processes, run programs, open files
  • The kernel and some system processes may operate in privileged mode.

System Initialisation

When the system powers on, an interrupt causes the computer to execute a bootstrap program stored in ROM.

This code initialises the system, runs diagnostic checks, and sets up basic I/O, making the machine ready to be used.

It then loads the OS kernel from the boot partition of a specified external disk drive and passes control to it.

The kernel performs further initialisation and creates various system processes.

Further processes may be created because of user activity.

Command Interpreter

Often called a shell.

Accepts and runs commands specified by the user. May be graphical or textual.

The shell is just a normal user-level process

System calls

User processes are not allowed to inspect or alter kernel code, or to make calls directly to its subroutines.

Services are requested via system calls.

Many system calls are given a ‘wrapper’ to make them look like ordinary subroutines.

Together with other useful functions, they form an Application Programming Interface (API).

Processes

Processes fall under the processor manager (discussed earlier).

The OS processor manager is responsible for every aspect of a process.

A process control block contains:

  • unique process ID
  • User ID of process owner
  • process state
  • position in memory
  • accounting stats (time used etc)
  • resources allocated (open files, devices)
  • register values (instruction pointers)

Process states

  • Running
    • On a uniprocessor machine, only one process can be executing at any time. The process that has the CPU is said to be in the running state.
    • May be interrupted at the end of its time slice if no I/O requests or system calls are performed.
  • Ready
    • Refers to a process that is able to run, but does not currently have the CPU
  • Waiting
    • Refers to a process that cannot run until some event (such as the completion of an I/O operation) occurs

Initially, when we create a process, it goes into the new state, where the process is initialised and memory is allocated. At some point it moves into the ready state. The process then enters the running state; if it reaches the end of its run (if it finishes), it enters the terminated state.

When a process completes, it doesn't just stop: the allocated memory must be taken back, files closed, resources reclaimed, and so on.

If the process is waiting on an I/O event, it goes into the blocked state until that I/O event occurs.

Otherwise it goes back to the ready state which eventually leads back to the running state.

             +-------+  dispatch   +---------+   exit   +------------+
+-----+      |       | ----------> |         | -------> | TERMINATED |
| NEW | ---> | READY |             | RUNNING |          +------------+
+-----+      |       | <---------- |         |
             +-------+   timeout   +---------+
                 ^                      |
                 | I/O event            | I/O event
                 | completed            v
                 |                 +---------+
                 +---------------- | BLOCKED |
                                   +---------+
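The life cycle above can be captured as a small state table. The Java sketch below is purely illustrative (a toy model with made-up names, not operating system code):

```java
import java.util.EnumSet;
import java.util.Set;

// Toy model of the process life cycle, for illustration only.
enum ProcState { NEW, READY, RUNNING, BLOCKED, TERMINATED }

class StateMachine {
    // Legal transitions, taken from the state diagram above.
    static Set<ProcState> next(ProcState s) {
        switch (s) {
            case NEW:     return EnumSet.of(ProcState.READY);      // admitted
            case READY:   return EnumSet.of(ProcState.RUNNING);    // dispatched
            case RUNNING: return EnumSet.of(ProcState.READY,       // timeout
                                            ProcState.BLOCKED,     // waits on I/O
                                            ProcState.TERMINATED); // exit
            case BLOCKED: return EnumSet.of(ProcState.READY);      // I/O completed
            default:      return EnumSet.noneOf(ProcState.class);  // terminated: no way out
        }
    }

    public static void main(String[] args) {
        System.out.println("RUNNING can move to: " + next(ProcState.RUNNING));
    }
}
```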

Process creation

Processes are created (spawned) by other processes.

The original process is the parent and the new process is the child.

We’ll use Unix as an example here.

The exec() system call allows one program to execute another.

  • Stays in the same process
  • New program overwrites caller

That is, you stay in the same process, but the new program completely overwrites the current one.

The fork() system call spawns a new process.

  • Child is identical to parent.

The fork call creates a new process that is an exact copy of the parent, continuing execution from the same point.

The wait() system call suspends the calling process until one of its child processes finishes.

When you call fork(), it returns a value to the caller, and the value returned differs depending on whether you are in the child process or the parent process.

If it returns a value less than 0, an error occurred.

If it returns 0, you are in the child process.

If it returns a value above 0, you are in the parent (the value is the child's process ID).
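Java has no fork(), but the overall fork-then-exec-then-wait pattern can be sketched with ProcessBuilder, which spawns a child process running a new program (this assumes a Unix-like system with echo on the PATH):

```java
import java.io.IOException;

public class SpawnDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Spawn a child process running a different program (fork + exec),
        // sharing this process's standard input and output.
        Process child = new ProcessBuilder("echo", "hello from child").inheritIO().start();
        // Like the Unix wait() call: suspend until the child terminates.
        int status = child.waitFor();
        System.out.println("child exited with status " + status);
    }
}
```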

At startup, the kernel creates process 0 (the first process).

This is a kernel-level process, so it can access kernel code and data structures directly.

Process 1 is ‘init’ (now called ‘systemd’ on Linux)

  • parent of all other processes
  • spawns daemon processes
  • spawns process for user login

When a user logs on, the login process checks the name and password against stored details, then executes the shell.

The shell prompts for commands, then forks and execs to carry out those commands.

At logout, shell process dies.

Zombies and Orphans

Parent processes usually wait for their children to die.

If the death of a child is not acknowledged by the parent, the child becomes a ‘zombie’.

Zombie processes have no resources, but are still present in the process table.

If parent dies before its children, the children become ‘orphans’.

Orphans are adopted by the init process.

Daemons

A daemon process is not associated with any user or terminal and runs in the background.

It performs tasks requested of it by another set of processes.

Some examples include:

  • Printer spooler
  • Email servers
  • FTP server

Process Interaction

Processes that do not need to interact with any other processes are known as independent processes.

In modern operating systems many processes work together.

The operating system needs some way to synchronise these processes.

Synchronisation vs Communication

Synchronisation requires us to know when a process reaches a certain point in its operation.


Unix Signals

A process can usually be terminated by pressing CTRL+C.

This sends a signal to the process. The process responds by aborting.

Signals can be sent from one process to another using the kill() system call. (A process uses the signal() call to choose how it responds to a given signal.)

Signals can be sent from the command line using the kill command:

kill -<signal> <pid>

Despite its name, kill isn't only used to kill a process: it can send any signal to a given process.

You specify which signal to send and the process ID to send it to.

Responding to Signals

Example kill signals:

1 - hang up
2 - interrupt
3 - quit
6 - abort
9 - kill
14 - alarm
15 - software termination signal

The process can choose to ignore the signal, it does not have to terminate.

However, the kill signal (9) cannot be ignored; it always takes effect.

A process can “catch” a signal
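In Java, for example, a process can react to the common termination signals (SIGINT from CTRL+C, SIGTERM from kill -15) by registering a shutdown hook; a minimal sketch:

```java
public class SignalDemo {
    public static void main(String[] args) {
        // The hook runs when the JVM shuts down, including when the process
        // receives SIGINT (CTRL+C) or SIGTERM (kill -15), so it can clean up.
        Thread hook = new Thread(() -> System.out.println("shutting down, cleaning up"));
        Runtime.getRuntime().addShutdownHook(hook);
        System.out.println("running; try CTRL+C or kill -15 <pid>");
    }
}
```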

The killall command sends a signal to all processes matching a given name (or part of a name).

Pipes and Sockets

A pipe attaches the standard output of one program to the standard input of another, so that data flows directly between them.

Sockets

A socket is a communication endpoint of a channel between two processes.

Unlike pipes, the processes:

  • do not have to be related to each other (pipes only connect processes that share a common ancestry, all ultimately descending from process 1; sockets have no such requirement)
  • do not need to be on the same machine
  • do not need to be on the same local-area network (LAN)

When two processes communicate using sockets, one is designated the server and the other the client. In some cases it does not matter which is which.

More usually a daemon server process offers a service to many clients.
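A minimal sketch of the two endpoints in Java (here the client runs as a thread in the same JVM purely to keep the example self-contained; it could equally be another process on another machine):

```java
import java.io.*;
import java.net.*;

public class EchoDemo {
    // Create a server endpoint, connect a client endpoint to it,
    // and return the one-line message the server receives.
    static String roundTrip() throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {  // port 0: pick any free port
            int port = server.getLocalPort();
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("localhost", port);
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("ping");  // client sends one line
                } catch (IOException e) { throw new UncheckedIOException(e); }
            });
            client.start();
            try (Socket conn = server.accept();  // server waits for a connection
                 BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                return in.readLine();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("server received: " + roundTrip());
    }
}
```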

Threads

Threads are a sort of mini-process available in many modern systems.

A thread represents an activity that can be executed independently (and in parallel) with other parts of the process.

Each thread has its own:

  • program counter
  • stack space

But, unlike separate processes, threads share:

  • program code
  • data
  • open files

In many systems threads are handled at a higher level than processes and can be switched without involvement of the operating system kernel.

If this is the case, thread switching is very rapid and efficient. Such threads are commonly known as user-level threads.

Threads are very useful in event-driven programming.

A thread can be thought of as a light-weight process.

Threads are created within a normal (heavyweight) process:

  • example 1 - a web browser
    • one thread for retrieving data from the internet
    • another thread displays text and images
  • example 2 - a word processor:
    • one thread for display
    • one for responding to key strokes
    • one for spell checking

Thread benefits

  • Ease of programming: many applications need to perform several distinct tasks simultaneously
  • Responsiveness: in a web browser, the user can still interact with one part of a page while waiting for an image to download in another part
  • Resource sharing: threads share the memory and resources of the process they belong to
  • Economy: threads are more economical to create and context-switch because they share resources
  • Utilisation of parallel architectures: on a multiprocessor architecture, where each thread may run in parallel on a different processor, the benefits of multithreading increase

Issues

The ability to run tasks concurrently brings many benefits.

As we shall see, it also introduces a number of challenges for the programmer.

Introduction to Concurrent Programming

It is possible to run sections of code at the same time in a multiprocessing system.

You need to identify which sections of code can be run concurrently, in parallel.

Java Threads

When a Java program starts up, a single thread is always created for the program.

The JVM also has its own threads for garbage collection, screen updates, event handling etc.

You can make a thread in Java like so:

Example:

class TwoChar extends Thread {
    private char out1, out2;

    public TwoChar(char first, char second){
        out1 = first; out2 = second;
    }
    public void run() {
        System.out.println(out1);
        System.out.println(out2);
    }
}

public class ThreadEx {
    public static void main(String[] args){
        // thread declaration
        TwoChar let = new TwoChar('A', 'B');
        TwoChar dig = new TwoChar ('1', '2');

        let.start(); dig.start();
    }
}

Problem

Suppose we have an object with a method that just increments a counter:

public void inc() {
    count = count + 1;
}

Let's assume that count is initially zero and we have two threads, T1 and T2, which both call the increment method.

Each call is really a separate read, add, and write, so the operations of the two threads can interleave. Indeterminacy means we cannot determine the outcome: the final value of count might be 2, or it might be 1.
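A sketch of this in Java: two threads each call an unsynchronised increment many times, and because each call is a separate read, add, and write, updates can be lost (the final value varies from run to run):

```java
public class RaceDemo {
    static int count = 0;

    static void inc() { count = count + 1; }  // read, add, write: not atomic

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100_000; i++) inc(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 100_000; i++) inc(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Usually prints less than 200000: some increments were lost.
        System.out.println("count = " + count);
    }
}
```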

Mutual Exclusion

Indeterminacy arises because of possible simultaneous access to a shared resource.

The solution is to allow only one thread at a time to access the shared variable.

When a thread / process accesses any shared resource, it is in a critical section.

One way to enforce mutual exclusion is via semaphores.
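Before semaphores, note that Java can enforce a critical section directly with the synchronized keyword, which lets only one thread execute the method at a time; a minimal sketch:

```java
public class SafeCounter {
    private int count = 0;

    // synchronized makes the method body a critical section:
    // only one thread at a time may execute it on a given object.
    public synchronized void inc() { count = count + 1; }

    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Runnable work = () -> { for (int i = 0; i < 100_000; i++) c.inc(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get());  // always 200000 now
    }
}
```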

Semaphores

A semaphore is an integer-valued variable that is used as a flag to signal when a resource is free and can be accessed.

Only two operations are possible: wait and signal (also called P and V, test and increment, or down and up).

wait() and signal() are indivisible: while one thread or process is executing the operation, no other may do so.

They can also be used to enforce mutual exclusion.

There are also situations where you may want to allow more than one thread at a time to access a resource; a counting semaphore supports this.
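Java provides this directly as java.util.concurrent.Semaphore, where acquire() plays the role of wait and release() the role of signal; a small sketch of a counting semaphore initialised to 2:

```java
import java.util.concurrent.Semaphore;

public class SemDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore s = new Semaphore(2);  // up to two threads may hold a permit at once

        s.acquire();  // wait: take a permit, blocking if none are left
        s.acquire();
        System.out.println("permits left: " + s.availablePermits());  // prints 0

        s.release();  // signal: hand a permit back, waking a blocked thread if any
        System.out.println("permits left: " + s.availablePermits());  // prints 1
    }
}
```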

Synchronisation problems

There are a number of famous problems that characterise the general issue of concurrency control.

One of them is the producer-consumer problem.

Producer-Consumer problem


A producer process (e.g. a secretary) and a consumer process (e.g. a manager) communicate via a buffer.

The secretary puts a letter in a letter tray, and the manager picks up and processes that letter.

Both running in parallel

  • Producer cycle
    • produce item
    • deposit in buffer
  • Consumer cycle
    • extract item from buffer
    • consume item

These two cycles happen in parallel.

There may be many producers and many consumers.

Problems to solve

We have to ensure that:

  • A producer cannot put items in the buffer if it is full
  • A consumer cannot extract items from the buffer if it is empty
  • The buffer is not accessed by two threads simultaneously
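All three requirements are met by Java's bounded ArrayBlockingQueue: put() blocks while the buffer is full, take() blocks while it is empty, and access is internally synchronised. A sketch with one producer and one consumer:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> buffer = new ArrayBlockingQueue<>(3);  // bounded buffer

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) buffer.put("letter " + i);  // deposit; blocks if full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++)
                    System.out.println("processed " + buffer.take());    // extract; blocks if empty
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start(); consumer.start();
        producer.join(); consumer.join();
    }
}
```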

Question

What happens if thread t2 has its wait and signal operations swapped, as below?

t1                  t2
wait(s)             signal(s)
critical region     critical region
signal(s)           wait(s)

(If there were no wait or signal calls at all, the program wouldn't crash; you would just get indeterminacy.)

Answer

Let's suppose t1 gets there first and the semaphore is 1.

Its wait(s) sees a raised flag, lowers it, and enters the critical region.

Now suppose t2 arrives. It ought to be waiting on a lowered flag, but instead it executes signal(s), raising the flag, and enters its critical region. So t1 and t2 access the shared resource simultaneously.

Now let's suppose t2 gets there first. Its signal(s) raises an already-raised flag (s goes to 2), and it enters its critical region.

t1 then reaches its wait operation, sees the flag is raised, and enters its critical region.

Again, t1 and t2 are both inside their critical regions at the same time.

A producer must wait while the buffer is full, and a consumer must wait while it is empty. If the waiting is done by repeatedly testing a condition in a loop, it is called busy-waiting (or a spinlock).

Dining Philosophers Problem

The producer-consumer problem models a synchronisation environment in which processes with distinct roles have to coordinate with each other.

General Setting

There are n “philosophers”, and all they do is eat-think-eat-think all day long.

Philosophers need nothing in order to think, but in order to eat a philosopher needs two items of cutlery (two chopsticks).

Only n chopsticks, however, are provided.

This means that if a philosopher is hungry but either neighbour is eating, then he or she must wait until both chopsticks are available.

If there are 5 philosophers, then there are 5 chopsticks. So they have to share, because each needs 2 chopsticks to eat.

Each philosopher has a unique index value k in {0, 1, 2, 3, 4}.

The philosopher with index k must be able to use both chopsticks k and (k+1) mod 5.

Deadlock occurs in a concurrent system when each participant is waiting for another to do something: when EVERY process in a set is waiting for another member of that set to act, you have deadlock.

Starvation means that one or more of the participants in a concurrent system is indefinitely denied access to resources.
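A sketch of the setting in Java, with each chopstick as a binary semaphore. The acquire order used here (lowest-numbered chopstick first) is a classic deadlock-avoidance trick that goes beyond the statement of the problem above: it prevents the situation where every philosopher holds one chopstick and waits forever for the other.

```java
import java.util.concurrent.Semaphore;

public class Philosophers {
    static final int N = 5;
    static final Semaphore[] chopstick = new Semaphore[N];

    static void dine(int k) {
        int left = k, right = (k + 1) % N;
        // Always pick up the lower-numbered chopstick first, so the
        // philosophers cannot all hold one chopstick and wait for another.
        int first = Math.min(left, right), second = Math.max(left, right);
        for (int meal = 0; meal < 3; meal++) {
            chopstick[first].acquireUninterruptibly();
            chopstick[second].acquireUninterruptibly();
            System.out.println("philosopher " + k + " eats");
            chopstick[second].release();
            chopstick[first].release();
            // think for a while...
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < N; i++) chopstick[i] = new Semaphore(1);  // one per chopstick
        for (int k = 0; k < N; k++) {
            final int id = k;
            new Thread(() -> dine(id)).start();
        }
    }
}
```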

Deadlock

When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone.

Deadlock happens when every process in a set is waiting for another process in the set to finish.

Resource Allocation

Resource allocation graphs

  • if there are no cycles in the graph, there is no deadlock
  • the graph is a digraph (directed graph)


In these graphs, the R boxes are resource types, the dots in each box show the number of instances of that resource, and the P circles are processes.

In general, a cycle indicates that there may be deadlock.

Ways to deal with deadlock

Most operating systems ignore deadlock because it is costly to deal with.
