Detaching a thread means to mark a thread for destruction as soon as it terminates. Destroying a thread means to free, or make available for reuse, the resources associated with that thread.
If a thread has terminated, then detaching that thread causes the Threads Library to destroy it immediately. If a thread is detached before it terminates, then the Threads Library frees the thread's resources after it terminates.
A thread can be detached explicitly or implicitly:
It is illegal for your program to attempt to join or detach a detached thread. In general, you cannot perform any operation (for example, cancelation) on a detached thread. This is because the thread ID might have become invalid or might have been assigned to a new thread immediately upon termination of the thread. The thread should not be detached until no further references to it will be made.
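For example, a minimal sketch of explicit detaching (the worker() and start_detached_worker() routines here are only illustrative) might create a worker thread and detach it immediately:

#include <pthread.h>
#include <stdio.h>

/* Hypothetical start routine; no other thread ever joins with this thread. */
static void *worker (void *arg)
{
    /* ... do some independent work ... */
    return NULL;
}

void start_detached_worker (void)
{
    pthread_t   thread;
    int         status;

    status = pthread_create (&thread, NULL, worker, NULL);
    if (status == 0) {
        /* Explicitly detach: the Threads Library reclaims the thread's
         * resources as soon as it terminates. After this call, do not
         * use the thread ID again (no join, no cancel). */
        status = pthread_detach (thread);
        if (status != 0)
            fprintf (stderr, "pthread_detach failed (%d)\n", status);
    }
}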
2.3.5 Joining With a Thread
Joining with a thread means to suspend this thread's execution until another thread (the target thread) terminates. In addition, the target thread is detached after it terminates.
Join is one form of thread synchronization. It is often useful when one thread needs to wait for another and possibly retrieve a single return value. (The value may be a pointer, for example to heap storage.) There is nothing special about join, though---similar results, or infinite variations, can be achieved by use of a mutex and condition variable.
A thread joins with another thread by calling the pthread_join() routine and specifying the thread identifier of the thread. If the target thread has already terminated, then this thread does not wait.
By default, the target thread of a join operation is created with the detachstate attribute of its thread attributes object set to PTHREAD_CREATE_JOINABLE. It should not be created with the detachstate attribute set to PTHREAD_CREATE_DETACHED.
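As a sketch of a typical join (the compute() routine and the value it returns are hypothetical), the following code creates a joinable thread that returns a pointer to heap storage and then joins with it to retrieve that value:

#include <pthread.h>
#include <stdlib.h>
#include <stdio.h>

/* Hypothetical start routine that returns a pointer to heap storage. */
static void *compute (void *arg)
{
    int *result = malloc (sizeof (int));
    if (result != NULL)
        *result = 42;               /* Some computed value. */
    return result;
}

void create_and_join (void)
{
    pthread_t   thread;
    void        *value;
    int         status;

    /* The default thread attributes object already sets detachstate to
     * PTHREAD_CREATE_JOINABLE, so NULL attributes are sufficient here. */
    status = pthread_create (&thread, NULL, compute, NULL);
    if (status != 0)
        return;

    /* Suspend this thread until the target terminates; the target is
     * detached by the join, so its ID must not be used afterward. */
    status = pthread_join (thread, &value);
    if (status == 0 && value != NULL) {
        printf ("worker returned %d\n", *(int *)value);
        free (value);
    }
}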
Keep in mind these restrictions about joining with a thread:
Scheduling means to evaluate and change the states of the process' threads. As your multithreaded program runs, the Threads Library detects whether each thread is ready to execute, is waiting for a synchronization object, or has terminated, and so on.
Also, for each thread, the Threads Library regularly checks whether that thread's scheduling priority and scheduling policy, when compared with those of the process' other threads, entail forcing a change in that thread's state. Remember that scheduling priority specifies the "precedence" of a thread in the application. Scheduling policy provides a mechanism to control how the Threads Library interprets that priority as your program runs.
To understand this section, you must be familiar with the concepts presented in these sections:
A thread's scheduling priority falls within a range of values, depending on its scheduling policy. To specify the minimum or maximum scheduling priority for a thread, use the sched_get_priority_min() or sched_get_priority_max() routines---or use the appropriate nonportable symbol such as PRI_OTHER_MIN or PRI_OTHER_MAX. Priority values are integers, so you can specify a value between the minimum and maximum priority using an appropriate arithmetic expression.
For example, to specify a scheduling priority value that is midway between the minimum and maximum for the SCHED_OTHER scheduling policy, use the following expression (coded appropriately for your programming language):
pri_other_mid = ( sched_get_priority_min(SCHED_OTHER) + sched_get_priority_max(SCHED_OTHER) ) / 2
where pri_other_mid represents the priority value you want to set.
Avoid using literal numerical values to specify a scheduling priority setting, because the range of priorities can change from implementation to implementation. Values outside the specified range for each scheduling policy might be invalid.
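For example, the following sketch (the routine name create_mid_priority_thread() is hypothetical) computes the midpoint priority for the SCHED_OTHER policy at run time and applies it through a thread attributes object, using the standard POSIX scheduling attribute routines:

#include <pthread.h>
#include <sched.h>

/* Create a thread with a mid-range SCHED_OTHER priority. The start
 * routine (worker) is supplied by the caller. */
int create_mid_priority_thread (pthread_t *thread,
                                void *(*worker)(void *), void *arg)
{
    pthread_attr_t      attr;
    struct sched_param  param;
    int                 status;

    /* Compute a priority midway between the minimum and maximum for
     * SCHED_OTHER, rather than using a literal value. */
    param.sched_priority =
        (sched_get_priority_min (SCHED_OTHER) +
         sched_get_priority_max (SCHED_OTHER)) / 2;

    status = pthread_attr_init (&attr);
    if (status != 0)
        return status;

    /* Use the attributes object's scheduling settings instead of
     * inheriting them from the creating thread. */
    pthread_attr_setinheritsched (&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy (&attr, SCHED_OTHER);
    pthread_attr_setschedparam (&attr, &param);

    status = pthread_create (thread, &attr, worker, arg);
    pthread_attr_destroy (&attr);
    return status;
}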
2.3.6.2 Effects of Scheduling Policy
To demonstrate the results of the different scheduling policies, consider the following example: A program has four threads, A, B, C, and D. For each scheduling policy, three scheduling priorities have been defined: minimum, middle, and maximum. The threads have the following priorities:
Thread   Priority
A        minimum
B        middle
C        middle
D        maximum
On a uniprocessor system, only one thread can run at any given time. The ordering of execution depends upon the relative scheduling policies and priorities of the threads. Given a set of threads with fixed priorities such as the previous list, their execution behavior is typically predictable. However, in a symmetric multiprocessor (or SMP) system the execution behavior is completely indeterminate. Although the four threads have differing priorities, a multiprocessor system might execute two or more of these threads simultaneously.
When you design a multithreaded application that uses scheduling priorities, it is critical to remember that scheduling is not a substitute for synchronization. That is, you cannot assume that a higher-priority thread can access shared data without interference from lower-priority threads. For example, if one thread has a FIFO scheduling policy and the highest scheduling priority setting, while another has default scheduling policy and the lowest scheduling priority setting, the Threads Library might allow the two threads to run at the same time. As a corollary, on a four-processor system you also cannot assume that the four highest-priority threads are executing simultaneously at any particular moment. Refer to Section 3.1.3 for more information about using thread scheduling as thread synchronization.
The following figures demonstrate how the Threads Library schedules a set of threads on a uniprocessor based on whether each thread has the FIFO, RR, or throughput setting for its scheduling policy attribute. Assume that all waiting threads are ready to execute when the current thread waits or terminates and that no higher-priority thread is awakened while a thread is executing (that is, executing during the flow shown in each figure).
Figure 2-1 shows a flow with FIFO scheduling.
Figure 2-1 Flow with FIFO Scheduling
Thread D executes until it waits or terminates. Next, although thread B and thread C have the same priority, thread B starts because it has been waiting longer than thread C. Thread B executes until it waits or terminates, then thread C executes until it waits or terminates. Finally, thread A executes.
Figure 2-2 shows a flow with RR scheduling.
Figure 2-2 Flow with RR Scheduling
Thread D executes until it waits or terminates. Next, thread B and thread C are time sliced, because they both have the same priority. Finally, thread A executes.
Figure 2-3 shows a flow with Default scheduling.
Figure 2-3 Flow with Default Scheduling
Threads D, B, C, and A are time sliced, even though thread A has a lower priority than the others. Thread A receives less execution time than threads D, B, and C if any of them is ready to execute as often as thread A. However, the default scheduling policy protects thread A from being blocked from executing indefinitely.
Because low-priority threads eventually run, the default scheduling policy protects against occurrences of thread starvation and priority inversion, which are discussed in Section 3.5.2.
2.3.7 Canceling a Thread
Canceling a thread means requesting the termination of a target thread as soon as possible. A thread can request the cancelation of another thread or itself.
Thread cancelation is a three-stage operation:
The Threads Library implements thread cancelation using exceptions. Using the exception package, a thread to which a cancelation request has been delivered can explicitly catch the thread cancelation exception (pthread_cancel_e) defined by the Threads Library and perform cleanup actions accordingly. After catching this exception, the exception handler code should always reraise the exception, to avoid breaking the "contract" that cancelation leads to thread termination.
Chapter 5 describes the exception package.
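As a hedged sketch only, assuming the exception package macros described in Chapter 5 (TRY, CATCH, RERAISE, and ENDTRY, declared in pthread_exception.h), a start routine might catch pthread_cancel_e, release its own resources, and then reraise the exception:

#include <pthread_exception.h>

/* Hypothetical start routine that performs its own cleanup when
 * canceled, then lets the cancelation proceed. */
static void *guarded_work (void *arg)
{
    TRY {
        /* ... work that contains cancelation points ... */
    }
    CATCH (pthread_cancel_e) {
        /* ... release any resources held by this thread ... */

        /* Always reraise, so that cancelation still leads to
         * thread termination. */
        RERAISE;
    }
    ENDTRY

    return arg;
}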
2.3.7.2 Thread Return Value After Cancelation
When a thread is terminated due to cancelation, the Threads Library writes the return value PTHREAD_CANCELED into the thread's thread object. This is because cancelation prevents the thread from calling pthread_exit() or returning from its start routine.
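For example (a sketch; the cancel_and_reap() routine is hypothetical), a thread that cancels and then joins with a target thread can detect the cancelation by comparing the value returned by the join against PTHREAD_CANCELED:

#include <pthread.h>
#include <stdio.h>

/* Cancel a target thread, then join with it and check whether it
 * terminated because of the cancelation request. */
void cancel_and_reap (pthread_t target)
{
    void    *value;
    int     status;

    status = pthread_cancel (target);
    if (status != 0)
        return;

    status = pthread_join (target, &value);
    if (status == 0 && value == PTHREAD_CANCELED)
        printf ("target thread was canceled\n");
}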
2.3.7.3 Controlling Thread Cancelation
Each thread controls whether it can be canceled (that is, whether it receives requests to terminate) and how quickly it terminates after receiving the cancelation request, as follows:
A thread's cancelability state determines whether it receives a cancelation request. When created, a thread's cancelability state is enabled. If the cancelability state is disabled, the thread does not receive cancelation requests; instead, they remain pending.
If the thread's cancelability state is enabled, a thread may use the pthread_testcancel() routine to request the immediate delivery of any pending cancelation request. This routine enables the program to permit cancelation to occur at convenient places where it might not otherwise occur, such as in very long loops, to ensure that cancelation requests are noticed within a reasonable time.
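For instance, a compute-bound loop with no other cancelation points might call pthread_testcancel() at intervals, as in the following sketch (the loop body and routine name are hypothetical):

#include <pthread.h>

/* Hypothetical long-running computation with no blocking calls, and
 * therefore no other cancelation points, in its loop body. */
static void *long_computation (void *arg)
{
    long i;

    for (i = 0; i < 100000000L; i++) {
        /* ... purely computational work ... */

        /* Deliver any pending cancelation request at a convenient,
         * well-defined point in the loop. */
        if ((i % 10000) == 0)
            pthread_testcancel ();
    }
    return arg;
}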
If its cancelability state is disabled, the thread cannot be terminated by any cancelation request. This means that a thread could wait indefinitely if it does not come to a normal conclusion; therefore, exercise care if your software depends on cancelation.
A thread can use the pthread_setcancelstate() routine to change its cancelability state.
A thread can use the pthread_setcanceltype() routine to change its cancelability type, which determines whether it responds to a cancelation request only at cancelation points (synchronous cancelation) or at any point in its execution (asynchronous cancelation).
Initially, a thread's cancelability type is deferred, which means that the thread receives a cancelation request only at cancelation points---for example, during a call to the pthread_cond_wait() routine. If you set a thread's cancelability type to asynchronous, the thread can receive a cancelation request at any time.
If the cancelability state is disabled, the thread cannot be canceled regardless of the cancelability type. Setting cancelability type to deferred or asynchronous is relevant only when the thread's cancelability state is enabled.
A cancelation point is a routine that delivers a posted cancelation request to that request's target thread.
The following routines in the pthread interface are cancelation points:
pthread_cond_timedwait()
pthread_cond_wait()
pthread_delay_np()
pthread_join()
pthread_testcancel()
The following routines in the tis interface are cancelation points:
tis_cond_wait()
tis_testcancel()
Other routines that are also cancelation points are mentioned in the operating system-specific appendixes of this guide. Refer to the following topics on thread cancelability of system services:
When a cancelation request is delivered to a thread, the thread could be holding some resources, such as locked mutexes or allocated memory. Your program must release these resources before the thread terminates.
The Threads Library provides two equivalent mechanisms that can do the cleanup during cancelation, as follows:
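As a sketch of cleanup during cancelation using the POSIX cleanup-handler routines pthread_cleanup_push() and pthread_cleanup_pop() (the mutex, condition variable, and predicate names here are hypothetical), a locked mutex can be released even if the waiting thread is canceled:

#include <pthread.h>

static pthread_mutex_t  data_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t   data_cond  = PTHREAD_COND_INITIALIZER;
static int              data_ready = 0;

/* Cleanup routine: runs if the thread is canceled while the handler
 * is in effect, or when the handler is popped with a nonzero argument. */
static void unlock_mutex (void *arg)
{
    pthread_mutex_unlock ((pthread_mutex_t *)arg);
}

static void *consumer (void *arg)
{
    pthread_mutex_lock (&data_mutex);

    /* Register the cleanup handler before entering a cancelation
     * point while holding the mutex. */
    pthread_cleanup_push (unlock_mutex, &data_mutex);

    while (!data_ready)
        pthread_cond_wait (&data_cond, &data_mutex);   /* Cancelation point. */

    /* ... consume the data ... */

    /* Remove the handler and (with a nonzero argument) run it, which
     * unlocks the mutex on the normal path as well. */
    pthread_cleanup_pop (1);
    return arg;
}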
When an application sets the cancelability type to asynchronous, cancelation may occur at any instant, even within the execution of a single instruction. Because it is impossible to predict exactly when an asynchronous cancelation request will be delivered, it is extremely difficult for a program to recover properly. For this reason, an asynchronous cancelability type should be set only within regions of code that do not need to clean up in any way, such as straight-line code or looping code that is compute-bound and that makes no calls and allocates no resources.
While a thread's cancelability type is asynchronous, it should not call any routine unless that routine is explicitly documented as "safe for asynchronous cancelation." In particular, you can never use asynchronous cancelability type in code that allocates or frees memory, or that locks or unlocks mutexes---because the cleanup code cannot reliably determine the state of the resource.
In general, you should expect that no run-time library routine is safe for asynchronous cancelation, unless explicitly documented to the contrary. Only three routines are safe for asynchronous cancelation: pthread_setcanceltype(), pthread_setcancelstate(), and pthread_cancel().
For additional information about accomplishing asynchronous cancelation for your platform, see Section A.4 and Section B.9.
2.3.7.7 Example of Thread Cancelation Code
Example 2-1 shows a thread control and cancelation example.
Example 2-1 pthread Cancel
/*
 * Pthread Cancel Example
 */

/*
 * Outermost cancelation state
 */
{
    . . .
    int s, outer_c_s, inner_c_s;
    . . .
    /* Disable cancelation, saving the previous setting. */
    s = pthread_setcancelstate (PTHREAD_CANCEL_DISABLE, &outer_c_s);
    if(s == EINVAL)
        printf("Invalid Argument!\n");
    else if(s == 0)
        . . .
    /* Now cancelation is disabled. */
    . . .
    /* Enable cancelation. */
    {
        . . .
        s = pthread_setcancelstate (PTHREAD_CANCEL_ENABLE, &inner_c_s);
        if(s == 0)
            . . .
        /* Now cancelation is enabled. */
        . . .
        /* Enable asynchronous cancelation this time. */
        {
            . . .
            /* Enable asynchronous cancelation. */
            int outerasync_c_s, innerasync_c_s;
            . . .
            s = pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, &outerasync_c_s);
            if(s == 0)
                . . .
            /* Now asynchronous cancelation is enabled. */
            . . .
            /* Now restore the previous cancelation state (by
             * reinstating original asynchronous type cancel). */
            s = pthread_setcanceltype (outerasync_c_s, &innerasync_c_s);
            if(s == 0)
                . . .
            /* Now asynchronous cancelation is disabled,
             * but synchronous cancelation is still enabled. */
        }
        . . .
    }
    . . .
    /* Restore to original cancelation state. */
    s = pthread_setcancelstate (outer_c_s, &inner_c_s);
    if(s == 0)
        . . .
    /* The original (outermost) cancelation state is now reinstated. */
}
2.4 Synchronization Objects
In a multithreaded program, you must use synchronization objects whenever there is a possibility of conflict in accessing shared data. The following sections discuss three kinds of synchronization objects: mutexes, condition variables, and read-write locks.