Java Thread Scheduler

The thread scheduler is the Java component that decides which thread runs and which threads wait. The scheduler selects only threads that are in the runnable state. If more than one thread is in the runnable state, the scheduler must choose one of them and leave the others waiting. Which thread runs first is determined by two main factors: priority and arrival time.

What is Thread Scheduling in Java?

Thread scheduling is the mechanism by which the system determines which thread should run and receive resources. It operates at two levels: the application's thread library schedules user-level threads (ULTs) onto kernel-level threads (KLTs) via lightweight processes (LWPs), and the operating system's scheduler then schedules the kernel-level threads to perform various OS functions.

Lightweight Process (LWP)

  • The thread library decides which ULT runs on which LWP and for how long.
  • When an LWP blocks on an I/O operation, the thread library needs to create and schedule another LWP to run another ULT.
  • Thus, in an I/O-bound application, the number of LWPs equals the number of ULTs; in a CPU-bound application, it depends only on the application itself.
  • Each LWP is attached to a separate kernel-level thread.

Contention Scope

The word "contention" refers to the competition among user-level threads for access to kernel resources. The contention scope defines the extent to which this competition occurs.

Depending on its extent, contention is classified as:

Process Contention Scope (PCS)

Contention occurs between threads within the same process. The thread library schedules the highest-priority PCS thread onto an available LWP (the priority being assigned by the application developer during thread creation).

System Contention Scope (SCS)

  • Contention occurs among all threads in the system. Here, each SCS thread is associated with its own LWP by the thread library and is scheduled by the system scheduler to access kernel resources.
  • The group of one or more resources that a thread competes for is known as its allocation domain.
  • In a multicore system, there may be multiple allocation domains, each with one or more cores.
  • An allocation domain may contain one or more ULTs. Direct control over allocation domains is generally not provided, because the hardware and software architectural interfaces involved are complex.

That said, a multicore system may by default provide an interface for modifying a thread's allocation domain.

Imagine an operating system with ten user-level threads (T1 to T10) across three processes (P1, P2, and P3), all within a single allocation domain, so every process can draw on 100% of the CPU resources in that domain. How much CPU time each process and thread actually receives is determined by the contention scope, scheduling policy, and priority of each thread, set by the application developer through the thread library and by the system scheduler. These user-level threads may have different contention scopes.

Priority

Each thread has a priority between 1 and 10. Raising a thread's priority tells the scheduler to prefer that thread for CPU time, making it more likely to run and less likely to be preempted.

Priority Constants and Methods

  • MAX_PRIORITY: This constant holds the highest priority a thread can have (10).
    Syntax: public static final int MAX_PRIORITY
  • MIN_PRIORITY: This constant holds the lowest priority a thread can have (1).
    Syntax: public static final int MIN_PRIORITY
  • NORM_PRIORITY: This constant holds the default priority (5) assigned to a thread when it is created.
    Syntax: public static final int NORM_PRIORITY
  • getPriority(): This method returns the given thread's priority.
    Syntax: public final int getPriority()
  • setPriority(int newPriority): This method sets the thread's priority to newPriority; it throws IllegalArgumentException if the value is outside the 1 to 10 range.
    Syntax: public final void setPriority(int newPriority)
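The constants and methods above can be exercised in a short sketch. The class name PriorityDemo and its demo() helper are hypothetical names used only for illustration; the worker's task body is a placeholder.

```java
public class PriorityDemo {
    // Creates a thread, reads its default priority, then raises it.
    // Returns {defaultPriority, raisedPriority} for inspection.
    public static int[] demo() {
        Thread worker = new Thread(() -> System.out.println("worker running"));
        int before = worker.getPriority();       // inherits NORM_PRIORITY (5) from the creating thread
        worker.setPriority(Thread.MAX_PRIORITY); // raise to the maximum (10)
        int after = worker.getPriority();
        return new int[]{before, after};
    }

    public static void main(String[] args) {
        int[] p = demo();
        System.out.println("default priority: " + p[0] + ", raised priority: " + p[1]);
    }
}
```

Note that a new thread inherits the priority of the thread that created it, so when demo() runs on the main thread the initial value is NORM_PRIORITY.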

Time of Arrival

If two threads of the same priority enter the runnable state, priority alone cannot determine which of them is selected. In that case, the thread scheduler considers arrival time: the thread that arrived first is preferred over the other.

Thread Scheduler Algorithms

The Java thread scheduler follows scheduling algorithms based on these factors.

First-Come, First-Served Scheduling

Under this algorithm, the scheduler selects the thread that arrived first in the run queue. The table below gives an example:

Thread    Time of Arrival
t1        0
t2        1
t3        2
t4        3

The table above shows that thread t1 arrived first, followed by t2, then t3, and finally t4; threads are processed in order of arrival. Thread t1 is therefore processed first and thread t4 last.
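The FCFS order above amounts to a simple FIFO run queue. The following sketch (FcfsSketch and dispatchOrder are hypothetical names; the real scheduler lives in the JVM and OS, not in application code) shows the idea:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class FcfsSketch {
    // Returns the order in which threads would be dispatched under FCFS,
    // given their names listed in arrival order.
    public static String[] dispatchOrder(String[] arrivals) {
        Queue<String> runQueue = new ArrayDeque<>();
        for (String t : arrivals) runQueue.add(t);   // enqueue in arrival order
        String[] order = new String[arrivals.length];
        int i = 0;
        while (!runQueue.isEmpty()) order[i++] = runQueue.poll(); // dispatch the head first
        return order;
    }

    public static void main(String[] args) {
        for (String t : dispatchOrder(new String[]{"t1", "t2", "t3", "t4"}))
            System.out.println(t);
    }
}
```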

Time-slicing scheduling

First-come, first-served is a non-preemptive algorithm, which can lead to indefinite blocking, also known as starvation, when a long-running thread holds the CPU. To prevent this, each thread is allocated a time slice, and the running thread must relinquish the CPU after that amount of time. As a result, the other waiting threads get a chance to complete their tasks.
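Time slicing can be sketched as a round-robin simulation: each thread runs for one quantum, then goes to the back of the queue if it still has work. RoundRobinSketch, schedule, and the burst times are hypothetical names and values chosen for illustration.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class RoundRobinSketch {
    // Simulates time-slice scheduling: each thread runs for `quantum` units,
    // then is requeued at the back if it has remaining work.
    public static List<String> schedule(String[] names, int[] burst, int quantum) {
        Queue<int[]> queue = new ArrayDeque<>();     // entries: {threadIndex, remainingWork}
        for (int i = 0; i < names.length; i++) queue.add(new int[]{i, burst[i]});
        List<String> timeline = new ArrayList<>();
        while (!queue.isEmpty()) {
            int[] current = queue.poll();
            timeline.add(names[current[0]]);         // this thread runs for one slice
            current[1] -= quantum;
            if (current[1] > 0) queue.add(current);  // unfinished: back of the queue
        }
        return timeline;
    }

    public static void main(String[] args) {
        // t1 needs 2 units, t2 needs 1; with quantum 1 they alternate: t1, t2, t1
        System.out.println(schedule(new String[]{"t1", "t2"}, new int[]{2, 1}, 1));
    }
}
```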

Preemptive-Priority Scheduling

As the name suggests, this scheduling algorithm is based on thread priority. When multiple threads are in the runnable state, the thread scheduler chooses the one with the highest priority. Because the algorithm is also preemptive, time slices are still given to each thread to prevent starvation: after some time, even the highest-priority thread must relinquish the CPU if it has not completed its work.

Working of the Java Thread Scheduler

  • Let's look at how the Java thread scheduler works. Suppose five threads arrive at different times and with different priorities.
  • The thread scheduler should choose which thread to run on the CPU.
  • The thread scheduler selects the thread with the highest priority and starts executing its task.
  • If, while one thread is running, another thread with a higher priority enters the runnable state, the current thread is preempted and the incoming higher-priority thread receives CPU time.
  • When two threads (say, threads 2 and 3) have the same priority, the scheduling decision falls back to the FCFS algorithm: the thread that arrived first is allowed to run.

Steps to Understand the Working of Java Thread Scheduler

Step 1: There are five threads, each with a distinct arrival time and priority.

Step 2: The thread scheduler selects the thread that uses the CPU time first.

Step 3: The thread scheduler begins execution by choosing the thread with the highest priority. If a thread with an even higher priority arrives, the currently running thread is preempted and the newcomer is given CPU time.

Step 4: If Thread 2 and Thread 3 have the same priority, the thread scheduler employs the FCFS (First-Come, First-Served) scheduling algorithm and assigns the processor to the thread that arrived first.
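The policy described in these steps, highest priority first with FCFS breaking ties, can be modeled with a priority queue. SchedulerSketch, Task, and the sample priorities below are hypothetical, introduced only to illustrate the selection rule.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class SchedulerSketch {
    // A runnable thread with a name, a priority (higher runs first),
    // and an arrival time (earlier wins ties).
    record Task(String name, int priority, int arrival) {}

    // The policy from the steps above: highest priority first, FCFS on ties.
    static final Comparator<Task> POLICY =
            Comparator.comparingInt(Task::priority).reversed()
                      .thenComparingInt(Task::arrival);

    // Returns the order in which the given tasks would be dispatched.
    public static List<String> dispatchOrder(List<Task> tasks) {
        PriorityQueue<Task> ready = new PriorityQueue<>(POLICY);
        ready.addAll(tasks);
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) order.add(ready.poll().name());
        return order;
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(
                new Task("T1", 5, 0),
                new Task("T2", 8, 1),
                new Task("T3", 8, 2)); // same priority as T2, but arrived later
        // T2 runs first (highest priority, earliest arrival), then T3, then T1
        System.out.println(dispatchOrder(tasks));
    }
}
```

This sketch only models the selection rule; a real scheduler would also preempt the running thread when a higher-priority task arrives mid-execution.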

Advantages of PCS over SCS

  • If all threads are PCS, then context switching, synchronization, and scheduling all happen within user space, so PCS is cheaper than SCS.
  • PCS threads share one or more available LWPs. A separate LWP is associated with each SCS thread. A separate KLT is created for each system call.
  • The number of KLTs and LWPs created highly depends on the number of SCS threads created. This increases the kernel’s complexity in handling scheduling and synchronization.
  • This imposes a constraint on SCS threading: the number of SCS threads should generally be kept lower than the number of PCS threads.
  • If a system has multiple allocation domains, scheduling and synchronising resources become more tedious.
  • Problems arise when an SCS thread is part of more than one allocation domain; the system must handle n number of interfaces.
  • The second level of thread scheduling involves CPU scheduling by the system scheduler.
  • The scheduler treats each kernel-level thread like a separate process and gives it access to kernel resources.

Conclusion

Threads are scheduled for execution based on their priority. Even though each thread runs within the runtime, the operating system allocates CPU time slices to all threads. Different operating systems employ different scheduling algorithms to determine the order in which threads execute.
