Hello friends 🙂 This special post is for those who want to create a thread library/package of their own and don't know where to start. So let's begin. Threads are often provided in the form of a thread package. Such a package contains operations to create and destroy threads, as well as operations on synchronization variables such as mutexes and condition variables. There are basically two approaches to implementing a thread package/library:

  1. Create a thread library that executes entirely in user mode.
  2. Create a thread library that executes in user mode as well as in kernel mode, using lightweight processes (LWPs): let the kernel be aware of the threads and schedule them.

Advantages of building user-level thread library:

  1. First, it is cheap to create and destroy threads. Because all thread administration is kept in the user's address space, the cost of creating a thread is primarily the cost of allocating memory for its stack. Analogously, destroying a thread mainly involves freeing that memory. Both operations are cheap.
  2. Second, switching thread context can often be done in just a few instructions. Basically, only the values of the CPU registers need to be stored and then reloaded with the previously stored values of the thread being switched to. There is no need to change memory maps, flush the TLB, do CPU accounting, and so on. A thread context switch happens when two threads need to synchronize, for example when entering a section of shared data.

Drawbacks of building a user-level thread library: A major drawback of user-level threads is that invoking a blocking system call immediately blocks the entire process to which the thread belongs, and thus all the other threads in that process. Threads are particularly useful for structuring large applications into parts that can logically execute at the same time; in that case, blocking on I/O should not prevent other parts from executing in the meantime. For such applications, user-level threads are of no help.

These problems can be mostly circumvented by implementing threads in the operating system's kernel. Unfortunately, there is a high price to pay: every thread operation (creation, deletion, synchronization, etc.) has to be carried out by the kernel, requiring a system call. Switching thread contexts may then become as expensive as switching process contexts, so most of the performance benefit of using threads instead of processes disappears.

A solution lies in a hybrid form of user-level and kernel-level threads, generally referred to as lightweight processes (LWPs). An LWP runs in the context of a single (heavyweight) process, and there can be several LWPs per process. In addition to having LWPs, a system also offers a user-level thread package, giving applications the usual operations for creating and destroying threads, plus facilities for thread synchronization such as mutexes and condition variables. The important point is that the thread package is implemented entirely in user space: all operations on threads are carried out without intervention of the kernel. This combines kernel-level lightweight processes with user-level threads. The thread package can be shared by multiple LWPs, as shown in the figure, which means that each LWP can be running its own (user-level) thread.
Multithreaded applications are constructed by creating threads, and subsequently assigning each thread to an LWP. Assigning a thread to an LWP is normally implicit and hidden from the programmer.

The combination of (user-level) threads and LWPs works as follows. The thread package has a single routine to schedule the next thread. When creating an LWP (which is done by means of a system call), the LWP is given its own stack and is instructed to execute the scheduling routine in search of a thread to execute. If there are several LWPs, then each of them executes the scheduler. The thread table, which is used to keep track of the current set of threads, is thus shared by the LWPs. Protecting this table to guarantee mutually exclusive access is done by means of mutexes that are implemented entirely in user space. In other words, synchronization between LWPs does not require any kernel support.

When an LWP finds a runnable thread, it switches context to that thread. Meanwhile, other LWPs may be looking for other runnable threads as well. If a thread needs to block on a mutex or condition variable, it does the necessary administration and eventually calls the scheduling routine. When another runnable thread has been found, a context switch is made to that thread. The beauty of all this is that the LWP executing the thread need not be informed: the context switch is implemented completely in user space and appears to the LWP as normal program code.

Now let us see what happens when a thread does a blocking system call. In that case, execution changes from user mode to kernel mode, but still continues in the context of the current LWP. At the point where the current LWP can no longer continue, the operating system may decide to switch context to another LWP, which also implies that a context switch is made back to user mode. The selected LWP will simply continue where it had previously left off.
There are several advantages to using LWPs in combination with a user-level thread package. First, creating, destroying, and synchronizing threads is relatively cheap and involves no kernel intervention at all. Second, provided that a process has enough LWPs, a blocking system call will not suspend the entire process. Third, there is no need for an application to know about the LWPs; all it sees are user-level threads. Fourth, LWPs can easily be used in multiprocessing environments by executing different LWPs on different CPUs, and this multiprocessing can be hidden entirely from the application. The only drawback of lightweight processes in combination with user-level threads is that we still need to create and destroy LWPs, which is just as expensive as with kernel-level threads. However, creating and destroying LWPs needs to be done only occasionally, and is often fully controlled by the operating system.

An alternative, but similar, approach to lightweight processes is to make use of scheduler activations. The most essential difference between scheduler activations and LWPs is that when a thread blocks on a system call, the kernel does an upcall to the thread package, effectively calling the scheduler routine to select the next runnable thread. The same procedure is repeated when a thread is unblocked. The advantage of this approach is that it saves management of LWPs by the kernel. However, the use of upcalls is considered less elegant, as it violates the structure of layered systems, in which calls are permitted only to the next lower-level layer.
