In the discussion above, we encountered a common term in semaphore operations: deadlock. Simply put, a deadlock is a harmful situation in which each of two (or more) processes holds a resource that another is waiting for, so that all of them stall indefinitely. But what exactly is deadlock, and how can it be resolved?
1. Background
Consider a computer system with two programs: Program A controls a tape drive, and Program B controls a printer. At some point, Program A requests the printer, but it is already in use by Program B. Simultaneously, Program B requests the tape drive, which is still controlled by Program A. Each program holds a resource the other needs, and neither can proceed until the other relinquishes its resource. This situation is known as a deadlock or, in some texts, deadly embrace.
Deadlock occurs when processes wait for events that will never happen. Two or more processes are in a deadlock if each waits for an action that only another process in the set can perform.
Sometimes, the cost of preventing or correcting deadlock is higher than the cost of ignoring it, especially in systems where hardware failures are more common than software deadlocks. However, in real-time systems, ignoring deadlock can lead to chaos and render the system unusable.
An analogy for deadlock is a traffic intersection where:
- Each road has only a single lane.
- Cars represent processes competing for resources (sections of road).
- To resolve the deadlock, some cars must reverse (some resources must be given back).
- There is a risk of starvation, where some cars (processes) may never get through and never obtain their resources.
2. Resource-Allocation Graph
A resource-allocation graph (RAG) visually represents potential deadlocks:
- G = (V, E): graph G consists of a set of nodes V and a set of edges E.
- Nodes represent processes {P_1, P_2, P_3, ...} and resources {R_1, R_2, ...}.
- Edges represent requests (P_i → R_j) or allocations (R_j → P_i).
- Processes are depicted as circles, resources as squares, and individual instances of a resource as dots.
Conditions:
- If the graph contains no cycles, deadlock does not exist.
- If a cycle forms:
- Multiple instances per resource: Deadlock may exist.
- One instance per resource: Deadlock definitely exists. (A minimal cycle check for this case is sketched below.)
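Because a cycle in a graph where every resource has exactly one instance implies deadlock, detection in that case reduces to an ordinary cycle search over the wait-for relation. The following C sketch illustrates the idea with a depth-first search; the process count, adjacency matrix, and example edges are illustrative assumptions, not data taken from this text.

```c
/* Hypothetical sketch: detecting a cycle in a wait-for graph with DFS.
 * NPROC and the waits_for matrix are illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>

#define NPROC 3

/* waits_for[i][j] == true means process i waits for a resource held by j. */
static bool waits_for[NPROC][NPROC] = {
    {false, true,  false},   /* P0 waits for P1 */
    {false, false, true },   /* P1 waits for P2 */
    {true,  false, false},   /* P2 waits for P0 -> cycle, hence deadlock */
};

static bool dfs(int node, bool visited[], bool on_stack[])
{
    visited[node] = true;
    on_stack[node] = true;
    for (int next = 0; next < NPROC; next++) {
        if (!waits_for[node][next])
            continue;
        if (on_stack[next])          /* back edge: a cycle was found */
            return true;
        if (!visited[next] && dfs(next, visited, on_stack))
            return true;
    }
    on_stack[node] = false;
    return false;
}

int main(void)
{
    bool visited[NPROC] = {false}, on_stack[NPROC] = {false};
    for (int p = 0; p < NPROC; p++) {
        if (!visited[p] && dfs(p, visited, on_stack)) {
            printf("Cycle detected: the processes are deadlocked.\n");
            return 0;
        }
    }
    printf("No cycle: no deadlock.\n");
    return 0;
}
```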
3. System Model (Coffman’s Conditions)
According to Coffman (as cited in operating-systems texts), four conditions must hold simultaneously for deadlock to occur:
- Mutual Exclusion: Only one process can use a resource at a time.
- Hold and Wait: Processes holding resources can request additional resources.
- No Pre-emption: Resources cannot be forcibly taken from a process.
- Circular Wait: A cycle exists where each process waits for a resource held by the next.
4. Deadlock Handling Strategies
Deadlock handling can be categorized into three main approaches:
- Ignore Deadlock: Adopt the Ostrich Strategy—pretend it doesn’t exist. This is common in UNIX and MINIX.
- Prevent Deadlock: Ensure at least one of Coffman’s conditions is never met.
- Allow Deadlock but Recover: Detect when deadlock occurs and recover by reclaiming resources.
5. Deadlock Prevention
Preventing deadlock involves breaking one of Coffman's conditions:
- Mutual Exclusion: Use spooling to queue jobs instead of dedicating resources. However, this is impractical for certain resources like process tables.
- Hold and Wait: Require processes to request all resources at the beginning, reducing efficiency.
- No Pre-emption: Pre-empt resources, though this often produces unsatisfactory results.
- Circular Wait: Impose a strict ordering on resource requests to prevent cycles (a locking sketch follows the table below).
| Condition | Approach | Weakness |
|---|---|---|
| Mutual Exclusion | Resource spooling | Not all resources can be spooled; sharing through the spooler can itself cause trouble |
| Hold and Wait | Request all resources upfront | Inefficient and hard to predict needs |
| No Pre-emption | Pre-empt resources | Leads to incomplete or incorrect processing |
| Circular Wait | Order resource requests | Difficult to establish a universally fair order |
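As a concrete illustration of breaking the circular-wait condition, the sketch below always acquires two POSIX mutexes in a fixed global order, whichever order the caller happens to name them. The resource names, ranks, and helper function are hypothetical; the text does not prescribe any particular API.

```c
/* Minimal sketch (assumed POSIX threads): a fixed global lock order
 * breaks the circular-wait condition. Names and ranks are illustrative. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t tape_drive = PTHREAD_MUTEX_INITIALIZER;  /* rank 0 */
static pthread_mutex_t printer    = PTHREAD_MUTEX_INITIALIZER;  /* rank 1 */

/* Always lock the lower-ranked mutex first, so no cycle of waits can form. */
static void lock_in_order(pthread_mutex_t *a, int rank_a,
                          pthread_mutex_t *b, int rank_b)
{
    if (rank_a < rank_b) {
        pthread_mutex_lock(a);
        pthread_mutex_lock(b);
    } else {
        pthread_mutex_lock(b);
        pthread_mutex_lock(a);
    }
}

static void *program_a(void *arg)
{
    (void)arg;
    lock_in_order(&tape_drive, 0, &printer, 1);   /* asks for tape drive, then printer */
    printf("Program A uses the tape drive and the printer\n");
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&tape_drive);
    return NULL;
}

static void *program_b(void *arg)
{
    (void)arg;
    lock_in_order(&printer, 1, &tape_drive, 0);   /* asks for printer, then tape drive */
    printf("Program B uses the printer and the tape drive\n");
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&tape_drive);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, program_a, NULL);
    pthread_create(&b, NULL, program_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```

Because both threads end up locking the tape drive before the printer, the hold-and-wait pattern of Section 1 can no longer close into a cycle.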
6. Deadlock Avoidance
Avoidance involves granting resource requests only if they do not lead to a potential deadlock. The system checks if fulfilling the request will maintain a safe state. If not, the process is delayed until resources are released, ensuring the system avoids reaching an unsafe state.
In summary, deadlock handling requires a careful balance between performance, complexity, and reliability, with each approach offering trade-offs suited to different system needs.
7. Safe State
A state is considered a safe state if no deadlock occurs and there is a way, following some specific order, to fulfill all pending resource requests without resulting in a deadlock.
8. Unsafe State
A state is deemed unsafe if there is no way to fulfill all currently pending requests by executing processes in any order.
Figure 3-17. Safe and unsafe states.
9. Banker’s Algorithm
This scheduling algorithm was introduced by Dijkstra (1965) and is known as the Banker’s Algorithm. It models a small town whose bank represents the operating system, whose loanable funds represent resources, and whose borrowers represent processes that need resources. Deadlock occurs if a borrower who has not yet repaid a loan wants to borrow again, while the same money is needed by another borrower who also has not repaid. A minimal sketch of the algorithm's safety check is given after the list of weaknesses below.

Some weaknesses of the Banker’s Algorithm, as noted by Tanenbaum (1992), Stallings (1995), and Deitel (1990), include:
- Difficulty in knowing all resources a process will need at the start.
- A variable and changing number of processes.
- Resources that were available may become unavailable.
- The algorithm assumes processes are independent and not constrained by inter-process synchronization requirements.
- The algorithm requires granting all requests within a finite time.
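The core of the algorithm is a safety check: a request is granted only if the state that would result is still safe. Below is a minimal C sketch of that check for a single resource type; the number of processes, the allocations, and the maximum claims are illustrative assumptions, not figures from this text.

```c
/* Hypothetical sketch of the Banker's safety check for one resource type
 * (e.g., units of credit). All numbers are illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>

#define NPROC 3

int allocated[NPROC] = {1, 1, 2};  /* units each borrower already holds */
int maximum[NPROC]   = {6, 5, 4};  /* maximum each borrower may ever need */
int available        = 2;          /* units still held by the bank */

/* Returns true if every process can finish in *some* order (safe state). */
static bool is_safe(void)
{
    bool finished[NPROC] = {false};
    int free_units = available;

    for (int done = 0; done < NPROC; ) {
        bool progressed = false;
        for (int p = 0; p < NPROC; p++) {
            if (!finished[p] && maximum[p] - allocated[p] <= free_units) {
                /* p can run to completion and return everything it holds */
                free_units += allocated[p];
                finished[p] = true;
                done++;
                progressed = true;
            }
        }
        if (!progressed)
            return false;   /* no one can finish: the state is unsafe */
    }
    return true;
}

int main(void)
{
    printf("State is %s\n", is_safe() ? "safe" : "unsafe");
    return 0;
}
```

With these example numbers the state is safe: the third borrower can finish first, returning enough units for the second, and then the first.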
10. Deadlock Detection and Recovery
This method uses techniques to determine whether a deadlock has occurred and to identify the processes and resources involved. Once a deadlock is detected, recovery can begin by reclaiming the resources the deadlocked processes hold. Resources can be reclaimed through process termination or resource preemption. This approach is widely used in large mainframe systems.
11. Process Termination
This method removes processes involved in a deadlock based on certain conditions:
- Terminating all processes involved (this is costly).
- Terminating one process at a time until the deadlock is resolved (time-consuming).
- Terminating processes based on priority, execution time, time to complete, or rollback depth.
12. Resource Preemption
This method takes resources away from selected processes so the system does not remain stuck in an unsafe condition. Steps include:
- Selecting a victim process and the resource to preempt.
- Rolling back to a previously safe state.
- Ensuring no process falls into starvation due to this method.
13. Conclusion
Various software solutions can be used to address critical-section problems, but they may not handle more complex issues effectively. Semaphores, particularly when supported by hardware, provide better synchronization solutions.

Classic synchronization problems are essential for testing new synchronization schemes. Monitors, at the highest level, help coordinate the activities of multiple threads accessing synchronized data.
Deadlock can occur when two or more processes try to access resources already in use by other processes. There are three main approaches to dealing with deadlocks:
- Ensuring deadlock never occurs.
- Allowing deadlock to happen and then recovering.
- Ignoring deadlocks altogether.
These approaches lead to four methods:
- Deadlock Prevention
- Deadlock Avoidance
- Deadlock Detection
- Deadlock Recovery
Most modern operating systems opt to ignore deadlocks altogether. Silberschatz (1994) proposed a unified strategy to handle deadlocks that can be adapted to various situations:
- Group resources into different classes.
- Use linear ordering to prevent circular wait conditions, avoiding deadlocks among resource classes.
- Use the most suitable algorithm for each resource class.
14. Exercises
Processes may request combinations of the following resources: CD-ROM, soundcard, and floppy disk. Describe three deadlock prevention schemes that eliminate:
- Hold and Wait
- Circular Wait
- No Preemption
Assume Process P0 holds resources R2 and R3 and requests R4; Process P1 holds R4 and requests R1; Process P2 holds R1 and requests R3. Draw the Wait-for Graph. Is the system in deadlock? If yes, indicate the processes causing the deadlock. If not, show the process completion order.
User X has used 7 printers and needs 10 total. User Y has used 1 printer and will need a maximum of 4. User Z has used 2 printers and will need up to 4. Each user requests 1 printer. To whom should the OS grant the printer to maintain a safe sequence?
Which statements about deadlocks are true:
- Deadlock prevention is harder to implement than deadlock avoidance.
- Deadlock detection is chosen because it optimizes resource utilization.
- One prerequisite for deadlock detection is Hold and Wait.
- The Banker's Algorithm cannot avoid deadlocks.
- An unsafe state indicates a deadlock has occurred.
User 1 is using x printers and needs n in total; user 2 is using y printers and needs m in total, where y ≤ 12, n ≤ 12, x ≤ y, and m ≤ n. The state is safe if and only if:
- x + n ≤ 12 and y + m ≤ 12 and x + m ≤ 12.
- x + n ≤ 12 and y + m < 12 and x + m ≤ 12.
- x + n ≤ 12, or y + m ≤ 12 and x + m ≤ 12.
- x + m ≤ 12.
- All of the above statements ensure a safe state.
Operations on Processes
In a system, processes can be executed concurrently, which means they must be created and terminated dynamically. Therefore, the operating system must provide mechanisms for process creation and termination.

Figure 2-9: Operations on Processes
- Process Creation
A process can create several new processes during its execution through system calls. The process that initiates this creation is called the parent process, and the newly created processes are its children. Each new process can create additional processes, forming a process tree (see Figure 2-7).
Generally, a process requires certain resources (CPU time, memory, files, I/O devices) to complete its tasks. When a process creates a subprocess, the subprocess can either obtain resources directly from the operating system or share some of the parent's resources. Limiting a child process to a subset of the parent's resources prevents the system from being overwhelmed by too many subprocesses.
In addition to physical and logical resources, a parent process can pass initial data (input) to the child process. For example, if a process's function is to display the status of a file (say F1) on a terminal, it would receive the file name as input from its parent process. It uses this input to execute its task. Some operating systems allow the parent to pass open file descriptors to the child, enabling data transfer between the files without the need for additional resources.
When a process creates a new process, two execution possibilities exist:
- The parent continues executing concurrently with its child.
- The parent waits for the child to terminate before resuming.
Additionally, there are two possibilities regarding the new process’s address space:
- The child process is a duplicate of the parent process.
- The child process contains a new program loaded into its address space.
For example, in UNIX, each process is identified by a unique process identifier (PID). A new process is created using the fork() system call, which creates a copy of the parent process's address space. Both the parent and child processes resume execution after the fork() call, but the return value is 0 for the child and the child's PID for the parent.
Typically, the execlp() system call is used after fork() to replace the process's memory space with a new program. The execlp() system call loads a binary file into memory and starts its execution. This allows the parent and child to communicate and then proceed independently. The parent can either create more children or wait for the child to finish using the wait() system call. A C program demonstrating this is shown in Figure 2-10.
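Figure 2-10 itself is not reproduced here, but a minimal sketch of such a program, assuming a POSIX environment, looks as follows; running /bin/ls in the child is an arbitrary illustrative choice.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* duplicate the calling process */

    if (pid < 0) {                      /* fork failed */
        fprintf(stderr, "fork failed\n");
        return 1;
    } else if (pid == 0) {              /* child: fork() returned 0 */
        execlp("/bin/ls", "ls", (char *)NULL);  /* replace memory image with a new program */
        fprintf(stderr, "execlp failed\n");     /* reached only if execlp fails */
        return 1;
    } else {                            /* parent: fork() returned the child's PID */
        wait(NULL);                     /* wait for the child to terminate */
        printf("Child %d has finished\n", (int)pid);
    }
    return 0;
}
```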
In contrast, the DEC VMS operating system creates a new process by loading a specific program into it and starting its execution. Microsoft Windows NT supports both models: duplicating the parent’s address space or loading a new program into the new process's address space.
- Process Termination
A process terminates when it finishes executing its final statement and requests the operating system to delete it using the exit() system call. At this point, the process may return data (output) to its parent process through the wait() system call. All resources used by the process, including memory and I/O, are reclaimed by the operating system.
In some cases, a process may terminate another process using an appropriate system call (e.g., abort). Usually, only the parent can terminate its child processes to prevent arbitrary process termination. When a process creates a new process, the child's identifier is passed to the parent.
A parent may terminate a child for several reasons:
- The child exceeds its allocated resources.
- The task assigned to the child is no longer required.
- The parent is exiting, and the operating system does not allow a child to continue after its parent has terminated.
In the first case, the parent needs a mechanism to check the child's status. In many systems, including VMS, a child cannot exist if its parent has terminated. In such systems, when a process terminates, all its children are also terminated, a phenomenon known as cascading termination, typically initiated by the operating system.
To illustrate process execution and termination, consider that in UNIX a process can terminate using the exit() system call, and the parent can wait for the child to terminate using wait(). The wait() system call returns the PID of the terminated child, allowing the parent to identify which child has terminated. If the parent terminates before the child, the child is assigned a new parent (the init process), which later collects the child's execution statistics.
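As a small illustration of this exit()/wait() interaction (assuming a POSIX environment; the status value 42 is an arbitrary example):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        exit(42);                          /* child terminates, returning a status */
    } else if (pid > 0) {
        int status;
        pid_t done = wait(&status);        /* returns the PID of the terminated child */
        if (WIFEXITED(status))
            printf("Child %d exited with status %d\n",
                   (int)done, WEXITSTATUS(status));
    }
    return 0;
}
```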
