A thread is a unit of program flow control. Informally, you can think of a thread as a task that executes within a program. A thread is a part of a program that runs independently of, and concurrently with, other parts; one thread can be stopped or put to sleep without stopping the others. In Java, each thread is controlled by a unique Thread instance object, defined in the java.lang package.

Getting to Know Multi-threading
When a Java program is run, one thread is created automatically. This thread is usually called the main thread, and it is the parent of all other threads. Although the main thread starts automatically, we can control it through a Thread object obtained by calling the currentThread() method. Consider the following example.
Example 8.19. Main thread
class ThreadUtama {
    public static void main(String[] args) throws InterruptedException {
        // get the currently running thread
        Thread tUtama = Thread.currentThread();
        // display information about the thread
        System.out.print("Informasi thread: ");
        System.out.println(tUtama.toString());
        for (int i = 0; i < 5; i++) {
            System.out.println("Detik ke-" + (i + 1));
            Thread.sleep(1000); // delay for one second
        }
    }
}
In the example above, we declare a variable named tUtama and use it to capture the main thread of the program with the Thread.currentThread() call. We then display information about this thread on the screen. In the for loop we control the running thread: the sleep method delays the thread's work for one second on each iteration. Type in the code above and run it. Then delete the Thread.sleep(1000); line and run the program again. What is different?
2. Thread Creation & Usage
Threads can be created in two ways: by creating a new class that implements the Runnable interface, or by deriving a new class from the Thread class. Both methods require the java.lang package, which is imported automatically when we create a Java program.
In this section we discuss only the first method; the second will be covered in the multi-thread section. Consider the following example.
Example 8.20. Creating a thread with the Runnable interface
class TestRunnable implements Runnable {
    // implement the run() method declared in the Runnable interface
    public void run() {
        System.out.println("Thread anak dieksekusi");
    }
}
class PenerapanRunnable {
    public static void main(String[] args) {
        // (STEP 1): create a Runnable object
        TestRunnable obj = new TestRunnable();
        // (STEP 2): create a Thread object, passing in the Runnable object
        Thread t = new Thread(obj);
        // (STEP 3): start the thread
        t.start();
        System.out.println("Thread utama dieksekusi");
    }
}
In the example above, we first create a TestRunnable class that implements Runnable (note the line class TestRunnable implements Runnable and the code block below it). We then create a TestRunnable object from that class (see the line TestRunnable obj = new TestRunnable()). We use this object to create a new thread via the Thread class constructor (see the line Thread t = new Thread(obj)). Once created, the thread can be started (see the line t.start()).
3. Multi-Thread
In Examples 8.19 and 8.20 we dealt with only one or two threads, but Java makes it possible to create more than two. This condition is called multi-threading. Consider the following example.
Example 8.21. Creating multi-threads
class MyThread1 extends Thread {
    public void run() {
        try {
            for (int i = 0; i < 10; i++) {
                System.out.println("Thread pertama: detik ke-" + (i + 1));
                if (i != 9) {
                    sleep(1000);
                } else {
                    System.out.println("Thread pertama selesai...\n");
                }
            }
        } catch (InterruptedException ie) {
            System.out.println(ie.getMessage());
        }
    }
}
class MyThread2 extends Thread {
    public void run() {
        try {
            for (int i = 0; i < 5; i++) {
                System.out.println("Thread kedua: detik ke-" + (i + 1));
                if (i != 4) {
                    System.out.println();
                    sleep(1000);
                } else {
                    System.out.println("Thread kedua selesai...\n");
                }
            }
        } catch (InterruptedException ie) {
            System.out.println(ie.getMessage());
        }
    }
}
class DemoMultipleThread {
    public static void main(String[] args) {
        MyThread1 t1 = new MyThread1();
        t1.start();
        MyThread2 t2 = new MyThread2();
        t2.start();
    }
}
The program code above shows how to create two threads. The first thread is created by deriving the MyThread1 class from the Thread class — the second way of creating a thread mentioned in the previous section. The second thread, MyThread2, is created the same way. In the DemoMultipleThread class we then create an object t1 of MyThread1 and an object t2 of MyThread2. When run, the program's output looks like Figure 8.12.

Figure 8.12. Multi-thread execution results
Java Thread and Its Algorithm
The Java Virtual Machine schedules Runnable threads preemptively, always running the thread with the highest priority. When evaluating a scheduling algorithm, criteria are determined in advance, such as CPU utilization under a bound on response time, and throughput relative to turnaround time.

1. Java Thread Scheduling
The Java Virtual Machine schedules threads using a preemptive, priority-based scheduling algorithm. All Java threads are assigned a priority, and the Java Virtual Machine runs the Runnable thread with the highest priority. If two or more Runnable threads share the highest priority, the Java Virtual Machine schedules them using a FIFO queue.
1.1. Advantages of Java Thread Scheduling
- The Java Virtual Machine uses a preemptive priority based scheduling algorithm.
- All Java threads have priorities and the thread with the highest priority is scheduled for execution by the Java Virtual Machine.
- If there are two threads with the same priority, the First In First Out algorithm is used.
Another thread is executed when the following occurs:
- The thread currently executing exits the runnable state, for example, it is blocked or terminated.
- A thread with a higher priority than the currently executing thread enters the runnable state. Then the lower priority thread is suspended and replaced by the higher priority thread.
Time slicing is implementation-dependent. A thread can voluntarily give up control of the CPU by calling the yield() method; when a thread yields, another runnable thread of the same priority is scheduled to execute. A thread that voluntarily gives control of the CPU in this way is performing cooperative multitasking.
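As a sketch of cooperative multitasking with yield(), the following example (the class name YieldDemo and the output format are illustrative assumptions) starts two equal-priority threads that each yield the CPU after printing a line; the actual interleaving is scheduler-dependent:

```java
public class YieldDemo implements Runnable {
    public void run() {
        for (int i = 0; i < 3; i++) {
            System.out.println(Thread.currentThread().getName() + ": " + i);
            // hint to the scheduler: let another runnable thread of equal priority run
            Thread.yield();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(new YieldDemo(), "A");
        Thread b = new Thread(new YieldDemo(), "B");
        a.start();
        b.start();
        a.join(); // wait for both threads to finish
        b.join();
    }
}
```

Because yield() is only a hint, no particular interleaving of the "A" and "B" lines is guaranteed.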
1.2. Thread Priority
The Java Virtual Machine selects the runnable thread with the highest priority. All Java threads have a priority from 1 to 10: the highest priority is 10, the lowest is 1, and the normal priority is 5.
- Thread.MIN_PRIORITY = thread with lowest priority.
- Thread.MAX_PRIORITY = thread with highest priority.
- Thread.NORM_PRIORITY = thread with normal priority.
When a new thread is created, it has the same priority as the thread that created it. The priority of a thread can be changed using the setPriority() method.
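The priority constants and the setPriority() method can be exercised with a short sketch like the following (the class name PriorityDemo is an illustrative assumption):

```java
public class PriorityDemo {
    public static void main(String[] args) {
        // the three standard priority constants defined by java.lang.Thread
        System.out.println("MIN_PRIORITY  = " + Thread.MIN_PRIORITY);  // 1
        System.out.println("NORM_PRIORITY = " + Thread.NORM_PRIORITY); // 5
        System.out.println("MAX_PRIORITY  = " + Thread.MAX_PRIORITY);  // 10

        Thread t = new Thread(() -> System.out.println("child thread running"));
        // a new thread inherits the priority of the thread that created it
        System.out.println("inherited priority = " + t.getPriority());
        t.setPriority(Thread.MAX_PRIORITY); // raise the child's priority to 10
        t.start();
    }
}
```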
1.3. Round-Robin Scheduling with Java
// CircularList and TestThread are helper classes assumed to be defined elsewhere.
public class Scheduler extends Thread {
    private CircularList queue;
    private int timeSlice;
    private static final int DEFAULT_TIME_SLICE = 1000;

    public Scheduler() {
        timeSlice = DEFAULT_TIME_SLICE;
        queue = new CircularList();
    }

    public Scheduler(int quantum) {
        timeSlice = quantum;
        queue = new CircularList();
    }

    public void addThread(Thread t) {
        t.setPriority(2);
        queue.addItem(t);
    }

    private void schedulerSleep() {
        try {
            Thread.sleep(timeSlice);
        } catch (InterruptedException e) { }
    }

    public void run() {
        Thread current;
        this.setPriority(6);
        while (true) {
            // get the next thread
            current = (Thread) queue.getNext();
            if ((current != null) && (current.isAlive())) {
                current.setPriority(4);
                schedulerSleep();
                current.setPriority(2);
            }
        }
    }
}

public class TestScheduler {
    public static void main(String[] args) {
        Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
        Scheduler cpuScheduler = new Scheduler();
        cpuScheduler.start();
        TestThread t1 = new TestThread("Thread 1");
        t1.start();
        cpuScheduler.addThread(t1);
        TestThread t2 = new TestThread("Thread 2");
        t2.start();
        cpuScheduler.addThread(t2);
        TestThread t3 = new TestThread("Thread 3");
        t3.start();
        cpuScheduler.addThread(t3);
    }
}
2. Algorithm Evaluation
How do we choose a CPU scheduling algorithm for a particular system? The main problem is deciding which criteria to use in making the choice. To choose an algorithm, we must first define a measure, for example:
- Maximize CPU utilization under the constraint that the maximum response time is 1 second.
- Maximize throughput such that turnaround time is, on average, linearly proportional to total execution time.
2.1. Synchronization in Java
Every object in Java has a unique lock associated with it; ordinarily the lock is not used. When a method is declared synchronized, calling the method requires owning the lock for that object. If the lock is already held by another thread, the calling thread blocks and is placed in the object's lock set. For example:
public synchronized void enter(Object item) {
    while (count == BUFFER_SIZE)
        Thread.yield();
    ++count;
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}
public synchronized Object remove() {
    Object item;
    while (count == 0)
        Thread.yield();
    --count;
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}
2.2. Wait() and Notify() Methods
When a thread calls the wait() method:
- The thread releases the lock for the object.
- The state of the thread is set to blocked.
- The thread is placed in the wait set for the object.
When a thread calls the notify() method, one thread is selected from the threads in the wait set, and:
- The selected thread is moved from the wait set to the entry set.
- The state of the selected thread is set from blocked to runnable.
2.3. Example of Wait() and Notify() Methods
public synchronized void enter(Object item) {
    while (count == BUFFER_SIZE) {
        try {
            wait();
        } catch (InterruptedException e) { }
    }
    // add an item to the buffer
    ++count;
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    notify();
}
public synchronized Object remove() {
    Object item;
    while (count == 0) {
        try {
            wait();
        } catch (InterruptedException e) { }
    }
    // remove an item from the buffer
    --count;
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    notify();
    return item;
}
Thread Process Conclusion
1. Process
A process is a program in execution. As a process executes, it changes state, and the state of a process is defined by that process's current activity. Each process may be in one of several states: new, ready, running, waiting, or terminated. Each process is represented in the operating system by its process control block (PCB).
A process that is not executing is placed in a waiting queue. There are two major classes of queues in an operating system: I/O request queues and the ready queue. The ready queue contains all the processes that are ready to execute and are waiting for the CPU. Each process is represented by a PCB, and the PCBs can be linked together to form a ready queue. Long-term (job) scheduling is the selection of processes that will be allowed to contend for the CPU; it normally has a major impact on resource allocation, especially memory management. Short-term (CPU) scheduling is the selection of one process from the ready queue.
Processes in a system can execute concurrently. There are several reasons for allowing concurrent execution: information sharing, computation speedup, modularity, and convenience. Concurrent execution requires a mechanism for process creation and deletion.
The processes executing in an operating system may be classified as independent or cooperating processes. Cooperating processes must have a means to communicate with each other. In principle, there are two complementary communication schemes: shared memory and message passing. In the shared-memory scheme, communicating processes share some variables and are expected to exchange information through the use of these shared variables; the responsibility for providing communication rests with the application programmer, and the operating system needs to provide only the shared memory. In the message-passing scheme, processes exchange messages, and the responsibility for providing communication rests with the operating system.
2. Thread
A thread is a flow of control within a process. A multithreaded process contains several different flows of control within the same address space. The advantages of multithreading include increased user responsiveness, sharing of process resources, economy, and the ability to take advantage of multiprocessor architectures. User-level threads are threads that are visible to the programmer and are not known to the kernel. User-level threads are typically managed by a thread library in user space. Kernel-level threads are supported and managed by the operating system kernel. In general, user-level threads are faster to create and manage than kernel threads. There are three different types of models that relate to user and kernel threads.
- Many-to-one model: maps many user-level threads to a single kernel thread.
- One-to-one model: maps each user thread to one kernel thread.
- Many-to-many model: multiplexes many user-level threads onto a smaller or equal number of kernel threads.
Java is unique in that it supports threads at the language level. All Java programs consist of at least one thread of control, and the language makes it easy to create multiple threads of control within the same program. Java also provides an API for managing threads, including methods to suspend and resume a thread, to put a thread to sleep for a period of time, and to stop a running thread. A Java thread has four possible states: New, Runnable, Blocked, and Dead. The various APIs for managing threads often change the state of the thread itself.
3. CPU Scheduling
CPU scheduling is the selection of a process from the ready queue to be executed. Various algorithms are used for CPU scheduling. Among them is First Come First Serve (FCFS), a simple algorithm in which the process that arrives first is executed first. Another is Shortest Job First (SJF), in which the process with the shortest CPU burst is executed first.
The weakness of the SJF algorithm is that it cannot avoid starvation. To address this, the Round Robin (RR) algorithm was created. Round-robin CPU scheduling divides CPU time among processes in fixed units, the time quantum q. After a process has executed for q units of time, it is preempted and replaced by another process. The trade-off is in choosing q: if the quantum is too large, RR degenerates toward FCFS and response time suffers; if it is too small, too much time is spent on context switches.
FCFS scheduling is non-preemptive: a process cannot be interrupted before it finishes executing. RR scheduling is preemptive: a running process can be interrupted when its quantum expires. SJF scheduling can be either non-preemptive or preemptive.
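As an illustration of how waiting times fall out of a schedule, the following sketch computes FCFS waiting times for a set of illustrative burst times (all processes assumed to arrive at t = 0); under FCFS each process waits for the combined burst time of every process ahead of it:

```java
public class FcfsDemo {
    public static void main(String[] args) {
        int[] burst = {10, 1, 2, 1, 5}; // illustrative CPU bursts for P1..P5
        int elapsed = 0;   // time at which the next process starts
        int totalWait = 0; // sum of all waiting times
        for (int i = 0; i < burst.length; i++) {
            System.out.println("P" + (i + 1) + " waits " + elapsed + " ms");
            totalWait += elapsed; // each process waits for all earlier bursts
            elapsed += burst[i];
        }
        // total waiting time is 0 + 10 + 11 + 13 + 14 = 48 ms
        System.out.println("Average waiting time: " + (double) totalWait / burst.length + " ms");
    }
}
```

With these bursts the average waiting time is 9.6 ms; reordering the processes, as SJF or priority scheduling would, changes this figure, which is what distinguishes the algorithms compared above.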
Practice Questions
A. Process
1. Consider a message-passing communication scheme. What are the advantages and disadvantages of:
- Symmetric and asymmetric communication
- Automatic and explicit buffering
- Send by copy and send by reference
- Fixed-size and variable-sized messages
4. Explain what the kernel will do to context switches while a process is in progress?
5. Some single-user microcomputer operating systems such as MS-DOS provide little or no sense of concurrent processing. Discuss the most likely impact when concurrent processing is introduced into an operating system?
6. Show all possible states in which a process can be running, and draw a state transition diagram that explains how the process moves between states.
7. Can a process issue a disk I/O request while it is in the 'ready' state? Explain.
8. The kernel maintains a record for each process, called the Process Control Block (PCB). When a process is not running, the PCB contains information about the need to restart a process in the CPU. Describe two pieces of information that the PCB must contain.
9. Name five operating system activities that are examples of process management.
10. Define the differences between short term, medium term and long term scheduling.
11. Explain the actions taken by a kernel when switching context between processes.
12. What information is stored in the process table when switching context from one process to another.
13. In UNIX systems, there are many process states that can arise (transitions) due to (external) events of the OS and the process itself. What state transitions can be caused by the process itself? Mention them!
14. What are the advantages and disadvantages of:
15. Explain the differences between short-term, medium-term and long-term scheduling.
B. Thread
1. Show two programming examples of multithreading that can improve a single-threaded solution.
2. Show two programming examples of multithreading that do not improve upon a single-threaded solution.
3. Mention two differences between user level threads and kernel threads. Under what conditions is one of these threads better?
4. Describe the actions taken by a kernel during a context switch between kernel level threads.
5. What resources are used when a thread is created? How is it different from creating a process?
6. Show the actions taken by a thread library when switching context between user level threads.
C. CPU Scheduling
1. Define the difference between preemptive and nonpreemptive scheduling!
2. Explain why strict nonpreemptive scheduling is unlikely to be used in a computer center.
3. What are the advantages of using time quantum sizes at different levels of a multilevel queue system?
Questions 4 to 6 below use the following information:
- Given the following processes with CPU burst lengths (in milliseconds).
- All processes are assumed to arrive at time t = 0.
Table 2-1. Table for questions 4 - 6
| Process | Burst Time | Priority |
|---------|------------|----------|
| P1 | 10 | 3 |
| P2 | 1 | 1 |
| P3 | 2 | 3 |
| P4 | 1 | 4 |
| P5 | 5 | 2 |
4. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, nonpreemptive priority, and round-robin scheduling.
5. Calculate the waiting time of each process for each scheduling algorithm.
6. Explain the differences between the following scheduling algorithms:
- FCFS
- Round Robin
- Multilevel feedback queue
7. CPU scheduling determines an execution order for the scheduled processes. Given n processes to be scheduled on one processor, how many different schedules are possible? Give a formula in terms of n.
8. Distinguish between preemptive and nonpreemptive (cooperative) scheduling. Explain why strict nonpreemptive scheduling is unlikely to be used in a computer center, and state which kind of scheduling is better suited to such a system.
D. Glossary
- Algorithm
- FCFS Algorithm
- Scheduling algorithm
- Round Robin (RR) Algorithm
- SJF Algorithm
- Context Switch
- Queue
- Ready Queue
- CPU Burst
- CPU utilization
- Dispatcher
- Dispatch Latency
- Direct Mapping
- Idle
- Indirect Mapping
- Interruption
- I/O
- Java
- Java Thread
- Java Virtual Machine
- Kernel
- Lightweight
- Multi Programming
- Multiprocessor
- New State
- Nonpreemptive
- Scheduling
- CPU Scheduling
- Long Term CPU Scheduling
- Short Term CPU Scheduling
- FCFS (First Come First Serve) Scheduling
- Multiprocessor Scheduling
- Real Time Scheduling
- Round Robin Scheduling
- SJF (Shortest Job First) Scheduling
- Preemptive
- Process
- Real Time Computing
- Ready State
- Response Time
- Running State
- Shortest-Remaining-Time-First (SRTF)
- Synchronization
- System
- Batch System
- Operating system
- Uniprocessor System
- Soft Real Time Computing
- Starvation State
- Switching
- Symmetric Multiprocessing (SMP)
- Terminate
- Thread
- Throughput
- Time Slicing
- Time Units
- Turnaround Time
- Waiting State
- Waiting Time
E. References
NOTE: This reference site (URL) was accessed in mid-2003. It is possible that the site has now changed, or has been removed.
- Avi Silberschatz, Peter Galvin, and Greg Gagne, 2002, Applied Operating System Concepts, 1st Ed., John Wiley & Sons, Inc.
- William Stallings, 2001, Operating Systems -- Fourth Edition, Prentice Hall.
- RM Samik-Ibrahim, 2001, Mid Test Questions 2001, Faculty of Computer Science, University of Indonesia.
- http://www.cs.ui.ac.id/kuliah/IKI20230/materi/week4/Proses.PDF
- http://www.cs.ui.ac.id/kuliah/IKI20230/materi/week4/CPU-Scheduler.PDF
- http://www.cs.nyu.edu/courses/spring02/v22.0202-002/lecture-03.html
- http://www.unet.univie.ac.at/aix/aixprggd/genprogc/understanding_threads.htm
- http://www.etnus.com/Support/docs/rel5/html/cli_guide/procs_n_threads5.html
- http://www.crackinguniversity2000.it/boooks/1575211025/ch6.htm
- http://lass.cs.umass.edu/~shenoy/courses/fall01/labs/talab2.html
- http://www.isbiel.ch/~myf/opsys1/Exercises/Chap4/Problems1.html
- http://www.cee.hw.ac.uk/courses/5nm1/Exercises/2.htm
- http://www.cs.wisc.edu/~cao/cs537/midterm-answers1.txt
Source
- Operating Systems: IKI-20230 Lecture Material
- by Joint Working Group 21--28 IKI-20230 Even Semester 2002/2003
- $Revision: 1.3.0.0 $ Edition
- Published September 30, 2003
What is a Thread?
Threads, sometimes called lightweight processes, are the basic unit of CPU utilization. A thread comprises a thread ID, a program counter, a register set, and a stack. It shares its code section, data section, and operating-system resources with the other threads belonging to the same process.

Figure 2-16. Thread
1. Basic Concepts
Informally, a process is a program in execution. There are two types of processes: heavyweight processes, commonly known as traditional processes, and lightweight processes, sometimes called threads. Threads share the code section, data section, and operating-system resources with the other threads of the same process. A thread consists of a thread ID, a program counter, a set of registers, and a stack. With multiple threads of control, a process can perform more than one task at a time.
2. Benefits
- Response: Multithreading allows a program to continue running even when part of the program is blocked or is performing a long operation. For example, a multithreaded web browser can allow a user to interact with one thread while an image is being loaded by another thread.
- Resource sharing: By default, threads share the memory and resources of the process to which they belong. The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space.
- Economical: Allocating memory and resources to create a process is very expensive. Alternatively, since threads share the resources of a process, it is more economical to create threads.
- Empowering multiprocessor architectures: The benefits of multithreading can be enhanced with multiprocessor architectures, where each thread can run in parallel on a different processor. In single-processor architectures, the CPU typically switches between threads rapidly, creating the illusion of parallelism, but in reality only one thread is running at a time.
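A minimal sketch of the responsiveness benefit described above, with illustrative class and message names: one thread simulates a slow image load while the main thread remains free to do other work:

```java
public class ResponsiveDemo {
    public static void main(String[] args) throws InterruptedException {
        // worker thread simulating a long operation, e.g. loading an image
        Thread loader = new Thread(() -> {
            try {
                Thread.sleep(300); // stand-in for slow I/O
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("image loaded");
        });
        loader.start();
        // the main thread is not blocked while the load is in progress
        System.out.println("main thread still responsive while loading...");
        loader.join();
    }
}
```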
3. User Threads

Figure 2-17. User and Kernel Threads
User threads are supported above the kernel and implemented by a user-level thread library. The library provides support for thread creation, scheduling, and management with no support from the kernel.
4. Kernel Threads
Kernel threads are supported directly by the operating system: thread creation, scheduling, and management are performed by the kernel in kernel space. Because thread management is done by the operating system, kernel threads are generally slower to create and manage than user threads. However, because the kernel manages the threads, if one thread makes a blocking system call the kernel can schedule another thread in the application for execution. Also, in a multiprocessor environment, the kernel can schedule threads on different processors. Windows NT, Solaris, and Digital UNIX are operating systems that support kernel threads.
5. Multithreading Model
In the previous sub-chapter, the definition of thread, its advantages, levels such as user and kernel have been discussed. So in this sub-chapter, the discussion will be continued with the types of threads and examples of them both on Solaris and Java.
Many existing systems can support both user and kernel threads, so the multithreading models are also diverse. There are three common multithreading implementations that we will discuss, namely the many-to-one, one-to-one, and many-to-many models.

Figure 2-18. Multithreading Model
6. Many to One Model
The many-to-one model maps many user-level threads to a single kernel thread. Thread management is done in user space, so it is efficient; but if a thread makes a blocking system call, the entire process blocks. Another weakness is that the threads cannot run in parallel on a multiprocessor, because only one thread can access the kernel at a time.

Figure 2-19. Many to One Model
7. One to One Model
The one-to-one model maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run while one thread makes a blocking system call; it also allows multiple threads to run in parallel on a multiprocessor. The drawback of this model is that creating a user thread requires creating the corresponding kernel thread. Because the overhead of creating kernel threads can burden the performance of an application, most implementations of this model restrict the number of threads supported by the system. The one-to-one model is implemented by Windows NT and OS/2.

Figure 2-20. One to One Model
8. Many to Many Model
The many-to-many model multiplexes many user-level threads onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to a particular application or machine (an application may be allocated more kernel threads on a multiprocessor than on a uniprocessor). Whereas the many-to-one model lets developers create as many user threads as they wish but provides no real concurrency (only one thread can be scheduled by the kernel at a time), and the one-to-one model provides greater concurrency but forces developers to be careful not to create too many threads within an application (and in some cases limits the number that can be created), the many-to-many model suffers from neither shortcoming: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.

Figure 2-21. Many to Many Model
9. Threads In Solaris 2
Solaris 2 is a version of UNIX that until 1992 only supported heavyweight processes with control by a single thread. But now Solaris 2 has changed into a modern operating system that supports threads in the kernel and user levels, symmetric multiprocessing (SMP), and real-time scheduling.
Threads in Solaris 2 are supported by a library of APIs for thread creation and management. Solaris 2 also has an intermediate level of threads: between the user-level and kernel-level threads are lightweight processes (LWPs). Each process contains at least one LWP. The thread library multiplexes user-level threads onto the process's pool of LWPs, and only a user-level thread currently attached to an LWP can run; the rest are blocked or waiting for an LWP.
Operations in the kernel are executed entirely by standard kernel-level threads. There is one kernel-level thread for each LWP, but there are also some kernel-level threads that run in parts of the kernel without being associated with an LWP (such as a disk allocation thread). Kernel-level threads are the only objects that are scheduled into the system (see Section 2.7 on scheduling). Solaris uses a many-to-many model.
User-level threads in Solaris can be either bound or unbound. A bound user-level thread is permanently attached to an LWP. Only that thread works in the LWP, and upon request, the LWP can be forwarded to a processor. In some situations that require fast response times (such as real-time applications), binding a thread is useful. An unbound thread is not permanently attached to an LWP. All unbound threads are paired (multiplexed) into a space of LWPs available to the application. By default, all threads are unbound.
For example, while the system runs, each process may have many user-level threads. These user-level threads can be scheduled and switched among their LWPs by the thread library without any intervention from the kernel. This makes user-level threads very efficient: no kernel work is required to switch from one user-level thread to another.
Each LWP is associated with exactly one kernel-level thread, whereas each user-level thread is independent of the kernel. A process may have multiple LWPs, but they are needed only when threads must communicate with the kernel. For example, one LWP is needed for each thread that may block concurrently in a system call. Suppose five file-read requests occur at the same time: five LWPs are needed, because all five threads may be waiting for kernel I/O to complete. If the process has only four LWPs, the fifth request must wait for one of the LWPs to return from the kernel; a sixth LWP would add nothing if there are never more than five concurrent requests.
Kernel threads are scheduled by the kernel scheduler and executed on the CPU or CPUs in the system. If a kernel thread blocks (for example, waiting for an I/O process to complete), the processor is free to run another kernel thread. If the blocked thread is executing part of an LWP, then that LWP is also blocked. At a higher level, user-level threads that are currently attached to that LWP are also blocked. If a process has more than one LWP, then other LWPs can be scheduled by the kernel.
Developers use the following data structures to implement threads in Solaris 2:
- A user-level thread has a thread ID, a set of registers (including a PC and stack pointer), a stack and a priority (used by the library for scheduling). All of these data structures come from user space.
- An LWP has a register set for the user-level thread it is running, as well as memory and accounting information. An LWP is a kernel data structure and resides in kernel space.
- A kernel thread has only a small data structure and a stack. Its data structure includes copies of the kernel registers, a pointer to the LWP attached to it, and priority and scheduling information.
Each process in Solaris 2 has a lot of information contained in the process control block (PCB). In general, a process in Solaris has a process id (PID), a memory map, a list of open files, a priority, and a pointer to the list of LWPs associated with the process.

Figure 2-22. Solaris and Java threads
10. Java Threads
As we have seen, threads are supported not only by the operating system but also by thread library packages. For example, the Win32 library has APIs for multithreading Windows applications, and Pthreads has thread management functions for POSIX-compliant systems. Java is unique in its language-level support for creating and managing threads.
All Java programs have at least one thread of control. Even a simple Java program consisting only of a main() method runs as a single thread in the JVM. Java provides commands that allow developers to create and manipulate additional threads of control in their programs.
One way to create a thread explicitly is to create a new class that extends from the Thread class, and override the run() method of that Thread class.
Objects of this derived class run as individual threads of control in the JVM. However, creating an object derived from the Thread class does not in itself create a new thread; it is the start() method that actually creates the new thread.
Calling the start() method for the new object allocates memory for and initializes a new thread in the JVM, and calls the run() method, making the thread eligible for execution by the JVM. (Note: never call the run() method directly. Call the start() method, and it will call run() on your behalf.)
When this program is executed, two threads are created by the JVM. The first is the thread associated with the application, the thread that begins execution at the main() method. The second is the runner thread, created explicitly with the start() method. The runner thread begins its execution in the run() method.
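The program referred to here is not shown in this excerpt. The following is a minimal sketch of what such a program looks like; the class names Worker1 and First are assumptions (any names would do):

```java
// A sketch of the program described above: Worker1 extends Thread and
// overrides run(); the driver class name First is an assumption.
class Worker1 extends Thread {
    @Override
    public void run() {
        System.out.println("I am a worker thread.");
    }
}

class First {
    public static void main(String[] args) {
        Worker1 runner = new Worker1();
        runner.start();                    // creates the second thread; it calls run()
        System.out.println("I am the main thread.");
    }
}
```

Note that the order of the two printed lines is not guaranteed: once start() returns, the main thread and the runner thread run concurrently.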
Another option for creating a separate thread is to define a class that implements the Runnable interface. The Runnable interface is defined as follows:
Runnable
public interface Runnable
{
    public abstract void run();
}
So, when a class implements Runnable, it must define a run() method. The Thread class, besides defining static and instance methods, itself implements the Runnable interface. That explains why a class derived from Thread must define a run() method.
Implementing the Runnable interface is similar to extending the Thread class; the only difference is that "extends Thread" is replaced with "implements Runnable".
Worker2
class Worker2 implements Runnable
{
    public void run() {
        System.out.println("I am a worker thread.");
    }
}
Creating a thread from a class that implements Runnable differs from creating a thread from a class that extends Thread. Since the new class does not extend Thread, it does not have access to the static or instance methods of the Thread class, such as the start() method. However, an object of the Thread class is still needed, because it is the start() method that creates the new thread of control.
In a second class, a new Thread object is created, with the Runnable object passed to its constructor. When the thread is started with the start() method, the new thread begins execution in the run() method of the Runnable object. These two methods of thread creation are the most commonly used.
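A minimal sketch of this second step follows; Worker2 is repeated so the sketch is self-contained, and the driver class name Second is an assumption:

```java
// Creating a thread from a Runnable: the Runnable object is passed to the
// Thread constructor, and start() creates the new thread of control.
class Worker2 implements Runnable {
    @Override
    public void run() {
        System.out.println("I am a worker thread.");
    }
}

class Second {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = new Worker2();
        Thread thrd = new Thread(task);    // Thread object wrapping the Runnable
        thrd.start();                      // the new thread begins in task.run()
        thrd.join();                       // wait for the worker to finish
    }
}
```

The Thread object supplies the start() method that Worker2 itself lacks, which is why both objects are needed.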
11. Thread Management
Java provides several API facilities for managing threads, including:
- suspend(): suspends execution of the currently running thread.
- sleep(): puts the currently running thread to sleep for a specified period of time.
- resume(): resumes execution of a suspended thread.
- stop(): stops execution of a thread; once a thread has been stopped, it cannot be restarted.
Each of these methods for controlling the state of a thread may be useful in certain situations. For example, applets are a natural fit for multithreading because they typically involve graphics, animation, and audio, all of which are well suited to management by separate threads. However, it makes little sense for an applet to keep running while it is not being displayed, especially if it is CPU-intensive. One way to handle this situation is to run the applet as a separate thread of control, suspending the thread while the applet is not displayed and resuming it when the applet is displayed again.
You can do this by noting that the applet's start() method is called when the applet is first displayed. If the user leaves the web page or the applet scrolls out of view, the applet's stop() method is called (note that the names start() and stop() apply to both threads and applets). If the user returns to the applet's web page, the applet's start() method is called again. The applet's destroy() method is called when the applet is removed from the browser's cache. It is thus possible to prevent an applet from running while it is not displayed by using the applet's stop() method to suspend the thread's execution and the applet's start() method to resume it.
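One caution: Thread.suspend(), resume(), and stop() have long been deprecated in the Java API because they are deadlock-prone. The sketch below shows the commonly recommended alternative for the same pause/resume pattern: a flag checked cooperatively with wait()/notifyAll(). The class and method names are illustrative, not from the source:

```java
// Cooperative pause/resume with a flag and wait()/notifyAll() -- a sketch of
// the recommended replacement for the deprecated suspend()/resume() pair.
class PausableTask implements Runnable {
    private boolean paused = false;

    public synchronized void pauseTask() {
        paused = true;                     // worker will block at its next check
    }

    public synchronized void resumeTask() {
        paused = false;
        notifyAll();                       // wake the worker if it is waiting
    }

    private synchronized void waitWhilePaused() throws InterruptedException {
        while (paused) {
            wait();                        // releases the lock while waiting
        }
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                waitWhilePaused();         // block here while "suspended"
                Thread.sleep(20);          // stand-in for one unit of work
            }
        } catch (InterruptedException e) {
            // interrupted: fall through and let the thread die
        }
    }
}
```

An applet's start() and stop() methods could call resumeTask() and pauseTask() respectively to get the behavior described above without the deprecated APIs.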
12. Thread State
A Java thread can be in one of 4 possible states:
- new: a thread is in this state when its thread object has been created but not yet started.
- runnable: a thread becomes runnable when its start() method is called; start() allocates memory for the new thread in the JVM and calls the object's run() method.
- blocked: a thread becomes blocked when it executes a blocking statement, such as sleep() or suspend().
- dead: a thread moves to the dead state when its run() method terminates or when its stop() method is called.

Figure 2-25. Thread State
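The modern Java API exposes thread states, under slightly different names, through Thread.getState(), which returns values of the Thread.State enum such as NEW, RUNNABLE, TIMED_WAITING, and TERMINATED. A small sketch, assuming a worker that simply sleeps:

```java
// Observing thread states with Thread.getState(); the worker simply sleeps.
class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(500);        // keeps the worker in a blocked state
            } catch (InterruptedException e) {
                // ignore: the demo is ending
            }
        });
        System.out.println(t.getState()); // NEW: object created, not yet started
        t.start();
        Thread.sleep(100);                // give the worker time to reach sleep()
        System.out.println(t.getState()); // typically TIMED_WAITING (blocked in sleep)
        t.join();
        System.out.println(t.getState()); // TERMINATED: run() has finished (dead)
    }
}
```

The four states in the list above map onto these enum values, with Java splitting "blocked" into BLOCKED, WAITING, and TIMED_WAITING.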
13. Threads and JVM
In addition to the threads of control a Java program itself contains, the JVM runs several threads of its own asynchronously to handle system-level tasks such as memory management and graphics control. For example, the garbage collector examines objects in the JVM to determine whether they are still in use; if not, it returns their memory to the system.
14. JVM and Operating System
A typical implementation of the JVM is on top of a host operating system. This arrangement allows the JVM to hide the implementation details of the operating system and provides a consistent, abstract environment that allows Java programs to run on any operating system that supports a JVM. The specification for the JVM does not identify how Java threads are mapped to the operating system.
15. Multithreaded Solution Example
In this section, we introduce a complete multithreaded solution to the producer consumer problem using message passing. The server class first creates a mailbox to collect messages, using the message queue class. Then, separate producer and consumer threads are created, each referencing a shared mailbox. The producer thread alternates between sleeping, producing items, and putting items into the mailbox. The consumer alternates between sleeping and fetching an item from the mailbox and consuming it. Since the receive() method of the message queue class is non-blocking, the consumer must check whether the message it fetches is null.
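The classes described here (the server, the MessageQueue mailbox, and the producer and consumer threads) are not shown in this excerpt. The following is a self-contained sketch under those assumptions, with a bounded run so the demo terminates; since receive() is non-blocking and returns null on an empty mailbox, the consumer must check for null:

```java
import java.util.LinkedList;
import java.util.Queue;

// A mailbox with a non-blocking receive(): returns null when empty.
class MessageQueue {
    private final Queue<Object> queue = new LinkedList<>();
    public synchronized void send(Object item) { queue.add(item); }
    public synchronized Object receive() { return queue.poll(); } // null if empty
}

class Server {
    public static void main(String[] args) throws InterruptedException {
        final int COUNT = 5;                 // bounded run so the demo terminates
        MessageQueue mailbox = new MessageQueue();

        // Producer: alternates between sleeping and putting an item in the mailbox.
        Thread producer = new Thread(() -> {
            for (int i = 0; i < COUNT; i++) {
                try { Thread.sleep(10); } catch (InterruptedException e) { return; }
                mailbox.send("item-" + i);
            }
        });

        // Consumer: alternates between sleeping and fetching an item.
        Thread consumer = new Thread(() -> {
            int consumed = 0;
            while (consumed < COUNT) {
                try { Thread.sleep(10); } catch (InterruptedException e) { return; }
                Object item = mailbox.receive();
                if (item != null) {          // non-blocking receive: check for null
                    consumed++;
                }
            }
            System.out.println("Consumed " + consumed + " items");
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```

Because receive() never blocks, the consumer busy-polls between sleeps; a blocking mailbox (for example, java.util.concurrent.BlockingQueue) would remove the need for the null check.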
