



Resource Allocation Graph
• Deadlock can be described using a resource-allocation graph (RAG).
• The RAG consists of a set of vertices P = {P1, P2, …, Pn} of processes and R = {R1, R2, …, Rm} of resources.
• A directed edge from a process to a resource, Pi -> Rj, means that Pi has requested Rj.
• A directed edge from a resource to a process, Rj -> Pi, means that Rj has been allocated to Pi.
• If the graph has no cycles, deadlock cannot exist. If the graph has a cycle, deadlock may exist.
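The cycle test in the last bullet can be sketched in code. This is a minimal sketch assuming single-instance resources; the graph encoding and example below are invented, not from the text:

```python
# Minimal sketch of cycle detection in a resource-allocation graph,
# assuming single-instance resources. Graph encoding is illustrative:
# {node: [successor nodes]}, mixing process and resource vertices.

def has_cycle(graph):
    """DFS-based cycle test on a directed graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:      # back edge -> cycle found
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: a cycle.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))   # True
```

With single-instance resources, a True result means deadlock; with multiple instances it only means deadlock is possible.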





1. How would you know if there is a deadlock based on the resource allocation graph?
= Look for a cycle. If the graph has no cycle, there is no deadlock; if it has a cycle and every resource has only one instance, there is a deadlock; if resources have multiple instances, a cycle means only that deadlock is possible.




Example of a Resource Allocation Graph

-REQUEST: if process Pi has no outstanding requests, it can simultaneously request any number (up to multiplicity) of resources R1, R2, …, Rm. The request is represented by adding the appropriate request edges to the RAG of the current state.
-ACQUISITION: if process Pi has outstanding requests and they can all be satisfied simultaneously, then the request edges are replaced by assignment edges in the RAG of the current state.
-RELEASE: if process Pi has no outstanding requests, it can release any of the resources it is holding; the corresponding assignment edges are removed from the RAG of the current state.
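The three transitions can be sketched as operations on explicit edge sets; the class and method names here are illustrative, not a standard API:

```python
# Illustrative sketch of the REQUEST / ACQUISITION / RELEASE transitions
# on a resource-allocation graph kept as two edge sets.

class RAG:
    def __init__(self):
        self.request_edges = set()     # (process, resource) edges
        self.assign_edges = set()      # (resource, process) edges

    def request(self, p, resources):
        # REQUEST: only legal when p has no outstanding requests.
        assert not any(pp == p for pp, _ in self.request_edges)
        for r in resources:
            self.request_edges.add((p, r))

    def acquire(self, p):
        # ACQUISITION: all of p's request edges become assignment edges.
        mine = {(pp, r) for pp, r in self.request_edges if pp == p}
        for pp, r in mine:
            self.request_edges.discard((pp, r))
            self.assign_edges.add((r, pp))

    def release(self, p, r):
        # RELEASE: only legal when p has no outstanding request.
        assert not any(pp == p for pp, _ in self.request_edges)
        self.assign_edges.discard((r, p))
```

A multiplicity check per resource type is omitted to keep the sketch short.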

-Process
-Resource Type with 4 instances
-Pi requests instance of Rj
-Pi is holding an instance of Rj



Resource Allocation Graph With A Deadlock

• Deadlock Prevention & Avoidance: ensure that the system will never enter a deadlock state
• Deadlock Detection & Recovery: detect that a deadlock has occurred and recover
• Deadlock Ignorance: pretend that deadlocks will never occur


Resource Allocation Graph With A Cycle But No Deadlock

Safe, Unsafe, and Deadlock States

Basic Facts



-If the graph contains no cycles ⇒ no deadlock.
-If the graph contains a cycle ⇒
-if only one instance per resource type, then deadlock.
-if several instances per resource type, possibility of deadlock.
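For the multi-instance case, a detection-style check (in the spirit of the standard deadlock-detection algorithm) shows why a cycle is only a *possibility* of deadlock; the vectors and matrices below are invented examples:

```python
# Illustrative deadlock-detection check for multiple instances per
# resource type. available: free units per type; allocation[i]/request[i]:
# units held / requested by process i. Returns the deadlocked processes.

def deadlocked(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # i can finish; reclaim its units
                finish[i] = True
                changed = True
    return [i for i in range(n) if not finish[i]]

# One resource type with 3 instances: P0 and P1 each hold one and request
# one more. A wait cycle exists, but the spare instance breaks it.
print(deadlocked([1], [[1], [1]], [[1], [1]]))  # [] - no deadlock
```

With no spare instance (`available = [0]`), the same request pattern leaves both processes stuck.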

Resource-Allocation Graph For Deadlock Avoidance


-Simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
-The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
-Resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.
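The avoidance test described above can be sketched in the style of the Banker's-algorithm safety check; the numbers below are a made-up example, not from the text:

```python
# Hedged sketch of a Banker's-style safety test: given free resources,
# current allocation, and each process's declared maximum, look for an
# order in which every process can finish.

def is_safe(available, allocation, maximum):
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    finish = [False] * n
    order = []
    while len(order) < n:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # i finishes; reclaim its resources
                finish[i] = True
                order.append(i)
                break
        else:
            return None      # no process can proceed: state is unsafe
    return order             # a safe completion order

print(is_safe([3, 3, 2],
              [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
              [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]))
# [1, 3, 0, 2, 4]
```

A request is granted only if the state after granting it would still pass this test.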


Unsafe State In Resource-Allocation Graph

Traffic Deadlock for Exercise

Resource-Allocation Graph and Wait-for Graph


Deadlock Detection

• Allow the system to enter a deadlock state

• Detection algorithm

• Recovery scheme
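For single-instance resources, the detection algorithm is usually run on a wait-for graph obtained by collapsing the RAG; a minimal sketch (the edge encoding is illustrative):

```python
# Hedged sketch: collapse a RAG into a wait-for graph for single-instance
# resources. Edge Pi -> Pj means Pi waits for a resource held by Pj.

def wait_for(request_edges, assign_edges):
    holder = {r: p for (r, p) in assign_edges}       # resource -> holding process
    wfg = {}
    for (p, r) in request_edges:
        if r in holder:
            wfg.setdefault(p, set()).add(holder[r])
    return wfg

# P1 requests R1 held by P2; P2 requests R2 held by P1.
wfg = wait_for({("P1", "R1"), ("P2", "R2")},
               {("R1", "P2"), ("R2", "P1")})
print(wfg)   # P1 -> P2 and P2 -> P1: a cycle, hence deadlock
```

Detection then reduces to an ordinary cycle search on the wait-for graph.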

Deadlock Recovery

Recovery from Deadlock

• Recovery through preemption

– take a resource from some other process

– depends on nature of the resource

• Recovery through rollback

– checkpoint a process state periodically

– rollback a process to its checkpoint state if it is found deadlocked

• Recovery through killing processes

– kill one or more of the processes in the deadlock cycle

– the other processes can then acquire its resources

• In which order should we choose process to kill?
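One possible answer to the kill-order question is a cost heuristic that prefers cheap victims; the fields and weights below are entirely hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical victim-selection heuristic for deadlock recovery:
# prefer killing low-priority processes that have done little work.
# The record fields and the weights are invented for illustration.

def victim_order(processes):
    # Lower cost = better candidate to kill first.
    def cost(p):
        return p["cpu_time_used"] + 10 * p["priority"] - 2 * p["resources_held"]
    return sorted(processes, key=cost)

procs = [
    {"pid": 1, "priority": 5, "cpu_time_used": 100, "resources_held": 1},
    {"pid": 2, "priority": 1, "cpu_time_used": 5,   "resources_held": 3},
]
print([p["pid"] for p in victim_order(procs)])  # [2, 1] - kill pid 2 first
```

Real systems also weigh factors such as how many resources a process still needs and whether it is interactive or batch.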

Deadlock Prevention

Attacking the Mutual Exclusion Condition:

• Some devices (such as printer) can be spooled

– only the printer daemon uses printer resource

– thus deadlock for printer eliminated

• Not all devices can be spooled

In general, deadlock prevention restrains the ways requests can be made, so as to break one of the four necessary conditions for deadlock.
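Prevention can also attack the circular-wait condition by imposing a global ordering on resources and always acquiring them in ascending order; a minimal sketch (the lock names and ordering are illustrative):

```python
# Hedged sketch of breaking the circular-wait condition: every process
# acquires locks in a fixed global order, so a wait cycle cannot form.
import threading

locks = {"R1": threading.Lock(), "R2": threading.Lock()}
ORDER = {"R1": 1, "R2": 2}          # invented global resource ordering

def acquire_in_order(names):
    # Sorting by the global order makes a circular wait impossible.
    for name in sorted(names, key=ORDER.get):
        locks[name].acquire()

def release_all(names):
    for name in names:
        locks[name].release()

acquire_in_order(["R2", "R1"])      # actually acquired as R1 then R2
release_all(["R1", "R2"])
print("no circular wait possible")
```

The cost is that a process may have to acquire a resource earlier than it really needs it, just to respect the ordering.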

Methods for Handling Deadlocks

• Ignore the problem and pretend that deadlocks would never occur

• Ensure that the system will never enter a deadlock state (prevention or avoidance)

• Allow the system to enter a deadlock state and then detect/recover

Deadlock Characterization


Deadlock can arise if four conditions hold simultaneously:
Mutual exclusion: only one process at a time can use a resource
Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes
No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task
Circular wait: there exists a set {P0, P1, …, Pn, P0} of waiting processes such that
– P0 is waiting for a resource that is held by P1,
– P1 is waiting for a resource that is held by P2,
– …,
– Pn–1 is waiting for a resource that is held by Pn,
– and Pn is waiting for a resource that is held by P0.

Real-Time Scheduling

Thread Scheduling


An application can be implemented as a set of threads that cooperate and execute concurrently in the same address space. Criterion: performance improves when related threads run in parallel.

Load sharing: pool of threads, pool of processors.

Gang scheduling: Bunch of related threads scheduled together.

Dedicated processor assignment: Each program gets as many processors as there are parallel threads.

Dynamic scheduling: More like demand scheduling.



CPU SCHEDULING ALGORITHMS

Scheduling Algorithms

1.First-come, first-served (FCFS) scheduling
2.Shortest-job first (SJF) scheduling
3.Priority scheduling
4.Round-robin scheduling
5.Multilevel queue scheduling
6.Multilevel feedback queue scheduling

First-come, first-served (FCFS) scheduling
-is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes.

Shortest-job-first (SJF) scheduling
-is provably optimal, providing the shortest average waiting time. Implementing SJF scheduling is difficult because predicting the length of the next CPU burst is difficult. The SJF algorithm is a special case of the general priority-scheduling algorithm.
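The FCFS-versus-SJF difference shows up in a short calculation, assuming all jobs arrive at time 0 (the burst times below are made up):

```python
# Compare FCFS and SJF average waiting time for jobs all arriving at
# time 0. Each job waits for the total burst time of the jobs before it.

def avg_wait(bursts):
    elapsed, total = 0, 0
    for b in bursts:
        total += elapsed        # this job waited for everything before it
        elapsed += b
    return total / len(bursts)

bursts = [24, 3, 3]                      # FCFS: run in submission order
print(avg_wait(bursts))                  # 17.0
print(avg_wait(sorted(bursts)))          # 3.0  (SJF: shortest job first)
```

One long job at the head of an FCFS queue inflates everyone's wait; sorting by burst length minimizes the average.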

Comments: SJF is proven optimal only when all jobs are available simultaneously.
Problem: SJF minimizes the average wait time because it services small processes before it services large ones. While it minimizes average wait time, it may penalize processes with large service-time requests. If the ready list is saturated, processes with large service times tend to be left in the ready list while small processes receive service. In the extreme case, where the system has little idle time, processes with large service times will never be served. This total starvation of large processes may be a serious liability of this algorithm.
Solution: Multi-Level Feedback Queues

Multi-Level Feedback Queue
Several queues arranged in priority order.
Each queue can have a different scheduling discipline and time quantum.
Generally, higher-priority queues get smaller quanta.
Defined by:

  • the number of queues
  • the scheduling algorithm for each queue
  • when to upgrade a process's priority
  • when to demote it
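The rules above can be sketched as a toy two-level feedback queue; the quanta, jobs, and demotion policy below are invented for illustration:

```python
# Toy multi-level feedback queue: two levels, the higher-priority level
# has the smaller quantum, and a job that uses its whole quantum is
# demoted. Jobs and quanta are made-up examples.
from collections import deque

def mlfq(jobs, quanta=(2, 8)):
    """jobs: {name: burst time}. Returns the order in which jobs finish."""
    queues = [deque(jobs.items()), deque()]
    done = []
    while any(queues):
        level = 0 if queues[0] else 1        # always serve the top queue first
        name, remaining = queues[level].popleft()
        if remaining <= quanta[level]:
            done.append(name)                # job finishes within its quantum
        else:
            # used the whole quantum: demote (or stay at the lowest level)
            queues[min(level + 1, 1)].append((name, remaining - quanta[level]))
    return done

print(mlfq({"A": 3, "B": 1, "C": 20}))   # ['B', 'A', 'C']
```

Short, interactive-style jobs (B) finish quickly at high priority, while the long job (C) sinks to the bottom queue, which mitigates SJF-style starvation because the bottom queue still gets served whenever the top one is empty.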

SUBSTANTIAL INFORMATION ABOUT THREADS OF AT LEAST THREE OSes (Posted by ELIEZER)


Windows Server 2008

Kernel improvements are significant because the kernel provides

  • low-level operating system functions,
  • including thread scheduling,
  • interrupt and exception dispatching,
  • multiprocessor synchronization, and
  • a set of routines and basic objects that the rest of the operating system uses to implement higher-level constructs.

WINDOWS XP THREAD

Implements the one-to-one mapping
Each thread contains
-> A thread id
-> Register set
-> Separate user and kernel stacks
-> Private data storage area
The register set, stacks, and private storage area are known as the context of the thread
The primary data structures of a thread include:
-> ETHREAD (executive thread block)
-> KTHREAD (kernel thread block)
-> TEB (thread environment block)

WINDOWS NT’s Threads


- Primary thread - When a process is created, one thread is generated along with it.

This object is then scheduled on a system-wide basis by the kernel to execute on a processor.
After the primary thread has started, it can create other threads that share its address space and system resources but have independent contexts, which include execution stacks and thread-specific data. A thread can execute any part of a process's code, including a part currently being executed by another thread.

It is through threads, provided in the Win32 application programmer interface (API), that Windows NT allows programmers to exploit the benefits of concurrency and parallelism.

- Fiber - is NT's smallest user-level object of execution. It executes in the context of a thread and is unknown to the operating system kernel. A thread can consist of one or more fibers, as determined by the application programmer. Some literature [1,11] assumes that there is a one-to-one mapping of user-level objects to kernel-level objects; this is inaccurate. Windows NT does provide the means for many-to-many scheduling. However, NT's design is poorly documented, and the application programmer is responsible for the control of fibers, such as allocating memory, scheduling them on threads, and preemption.