Change wait_task_inactive() to check "state & match_state" instead of "state == match_state". This should not make any difference today, but it allows us to add more "stopped" bits which can be set or cleared independently.
IOW, wait_task_inactive() assumes that if task->state != 0, it can only change to TASK_RUNNING. Currently this is true, and in that case "state & match_state" works just as well. But, unlike the current check, it also works if task->state has other bits set while the caller is only interested in, say, __TASK_TRACED.
Note: I think wait_task_inactive() should be cleaned up upstream anyway; nowadays we have TASK_WAKING, so a non-running task does not necessarily change its ->state straight to TASK_RUNNING. It also makes sense to exclude the non-TASK_REPORT bits during the check. Finally, this patch probably makes sense anyway even without utrace. For example, a stopped _and_ traced thread could have task->state = TASK_STOPPED | TASK_TRACED; this can be useful.
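To make the difference concrete, here is a minimal user-space sketch (not kernel code; the bit values are hardcoded purely for illustration, mirroring __TASK_STOPPED/__TASK_TRACED from include/linux/sched.h of that era) comparing the old equality check with the new bitwise check:

	#include <stdio.h>

	#define __TASK_STOPPED	4
	#define __TASK_TRACED	8

	/* old check: the state must match exactly */
	static int old_match(long state, long match_state)
	{
		return state == match_state;
	}

	/* new check: any of the requested bits is enough */
	static int new_match(long state, long match_state)
	{
		return (state & match_state) != 0;
	}

	int main(void)
	{
		long state = __TASK_STOPPED | __TASK_TRACED;	/* stopped _and_ traced */
		long match_state = __TASK_TRACED;		/* caller only cares about tracing */

		/*
		 * old: 0, the extra __TASK_STOPPED bit breaks the comparison
		 * new: 1, the bit the caller asked for is still set
		 */
		printf("old=%d new=%d\n",
		       old_match(state, match_state),
		       new_match(state, match_state));
		return 0;
	}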
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 kernel/sched.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index ccacdbd..66ef2fb 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2289,7 +2289,7 @@ unsigned long wait_task_inactive(struct task_struct *p, long match_state)
 		 * is actually now running somewhere else!
 		 */
 		while (task_running(rq, p)) {
-			if (match_state && unlikely(p->state != match_state))
+			if (match_state && !likely(p->state & match_state))
 				return 0;
 			cpu_relax();
 		}
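As a usage note, this is roughly how a ptrace-like caller benefits from passing a mask. wait_task_inactive() and __TASK_TRACED are the real kernel symbols, but wait_for_traced_child() below is a hypothetical, heavily simplified helper (kernel context assumed; locking and the real ptrace_check_attach() logic omitted):

	/*
	 * Hypothetical, simplified caller sketch. The caller only cares
	 * that __TASK_TRACED is set; with the old "state == match_state"
	 * check, any additional state bit set on the child would make
	 * wait_task_inactive() fail spuriously.
	 */
	static int wait_for_traced_child(struct task_struct *child)
	{
		if (!wait_task_inactive(child, __TASK_TRACED))
			return -ESRCH;	/* child ran again or lost the traced bit */
		return 0;		/* child is off the CPU and still traced */
	}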