
About Linux: Bash wait command, waiting for more than 1 PID to finish execution

Question Detail

I recently posted a question asking if it was possible to prevent PIDs from being reused.

So far the answer appears to be no. (Which is fine.)

However, the user Diego Torres Milano added an answer to that question, and my question here is in regards to that answer.

Diego answered,

If you are afraid of reusing PID’s, which won’t happen if you wait as
other answers explain, you can use

echo 4194303 > /proc/sys/kernel/pid_max

to decrease your fear 😉

I don’t actually understand why Diego has used the number 4194303 here, but that’s another question.

My understanding was that I had a problem with the following code:

for pid in "${PIDS[@]}"
do
    wait $pid
done

The problem is that I have multiple PIDs in an array, and the for loop will run the wait command sequentially with each PID in the array; however, I cannot predict that the processes will finish in the same order that their PIDs are stored in the array.

i.e., the following could happen:

  • Start waiting for PID in array index 0
  • Process with PID in index 1 of array terminates
  • New job(s) run on system, resulting in PID which is stored in index 1 of PID array being reused for another process
  • wait terminates as PID in array index 0 exits
  • Start waiting for PID in array index 1, except this is now a different process and we have no idea what it is
  • The process which reused the PID which wait is currently waiting for never terminates. Perhaps it is a mail server or something else which a system admin has started.
  • wait keeps waiting until the next serious Linux bug is found and the system is rebooted or there is a power outage

Diego said:

which won’t happen if you wait as other answers explain

i.e., that the situation I have described above cannot happen.

Is Diego correct?

  • If so, why can the situation I described above not occur?

Or is Diego not correct?

  • If so, well, then I'll post a new question later today…

Additional notes

It has occurred to me that this question might be confusing unless you are aware that the PIDs are the PIDs of processes launched in the background, i.e.:

my_function &
PID="$!"
PIDS+=($PID)

Question Answer

Let’s go through your options.

Wait for all background jobs, unconditionally

for i in 1 2 3 4 5; do
    cmd &
done
wait

This has the benefit of being simple, but you can’t keep your machine busy. If you want to start new jobs as old ones complete, you can’t. Your machine gets less and less utilized until all the background jobs complete, at which point you can start a new batch of jobs.
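A related caveat worth noting (a minimal sketch, not from the original answer): a bare `wait` with no arguments always reports status 0, regardless of how the background jobs exited, so the individual exit codes are lost:

```shell
#!/usr/bin/env bash
# Placeholder jobs: one fails, one succeeds.
(exit 1) &
(exit 0) &
wait
# A bare `wait` returns 0 even though one job failed.
echo "bare wait returned: $?"
```

If the per-job exit status matters, you need one of the `wait $pid` variants below.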

Related is the ability to wait for a subset of jobs by passing multiple arguments to wait:

unrelated_job &
for i in 1 2 3 4 5; do
  cmd & pids+=($!)
done
wait "${pids[@]}"   # Does not wait for unrelated_job, though

Wait for individual jobs in arbitrary order

for i in 1 2 3 4 5; do
   cmd & pids+=($!)
done

for pid in "${pids[@]}"; do
   wait "$pid"
   # do something when a job completes
done

This has the benefit of letting you do work after a job completes, but
still has the problem that jobs other than $pid might complete first, leaving your machine underutilized until $pid actually completes. You do, however, still get the exit status for each individual job, even if it completes before you actually wait for it.
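To illustrate that last point, here is a small sketch (the `(exit "$code")` subshells are placeholder jobs): each `wait "$pid"` reports that job's own exit status, even though all the jobs have long since finished by the time we wait for them:

```shell
#!/usr/bin/env bash
pids=()
for code in 0 3 7; do
    (exit "$code") &    # stand-in for a real job exiting with $code
    pids+=($!)
done

# Waiting in array order still yields each job's own status.
for pid in "${pids[@]}"; do
    wait "$pid"
    echo "job $pid exited with $?"
done
```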

Wait for the next job to complete (bash 4.3 or later)

for i in 1 2 3 4 5; do
   cmd & pids+=($!)
done

for pid in "${pids[@]}"; do
   wait -n
   # do something when a job completes
done

Here, you can wait until a job completes, which means you can keep your machine as busy as possible. The only problem is, you don’t necessarily know which job completed, without using jobs to get the list of active processes and comparing it to pids.
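As a sketch of that idea (bash 4.3 or later, with placeholder `sleep` jobs standing in for real work), a simple throttled pool that starts a new job whenever any running one finishes might look like:

```shell
#!/usr/bin/env bash
max_jobs=3
running=0
for i in $(seq 1 10); do
    if (( running >= max_jobs )); then
        wait -n          # block until *some* job finishes
        (( running-- ))
    fi
    sleep 0.1 &          # placeholder workload
    (( running++ ))
done
wait                     # drain the remaining jobs
echo "all done"
```

This keeps up to `max_jobs` jobs running at all times, at the cost of not knowing which job each `wait -n` observed.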

Other options?

The shell by itself is not an ideal platform for doing job distribution, which is why there are a multitude of programs designed for managing batch jobs: xargs, parallel, slurm, qsub, etc.
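For example, a rough equivalent of the loops above using `xargs` (here with `-P 4` for up to four parallel jobs; the `echo` body is a stand-in for real work):

```shell
# xargs handles the scheduling: it keeps up to 4 jobs running
# and replaces each {} with one input line.
printf '%s\n' 1 2 3 4 5 | xargs -P 4 -I{} sh -c 'echo "job {} done"'
```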

This is old, but the scenario presented above, where a deferred wait ends up waiting on some random unrelated process due to PID collision, hasn’t been directly addressed.

It’s not possible at the kernel level. The way it works there is that, prior to the parent process calling wait(2)¹, the child process still exists. Because the child still exists, Linux will run out of PIDs rather than reuse one. This manifests at times as so-called zombie or “defunct” processes: children which have exited but have yet to be “reaped” by their parent.

Now, at the shell level you don’t have to call wait(1)¹ for child processes to be reaped; bash does this automatically. I haven’t confirmed it, but when you run wait $pid for a child PID which exited long ago, I would wager bash realises it has already reaped that child and returns its status immediately rather than waiting for anything.

¹ the wait(N) notation is a convention used to disambiguate between API layers – N refers to the section of the manual a command/function is located in. In this case we have:

  • wait(2): the syscall – see man 2 wait
  • wait(1): the shell command – see man 1 wait or help wait

If you want to know what lives in each manual section, try man N intro.
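A small sketch can confirm the wager above (the `exit 42` subshell is a stand-in child): even after the child has long exited and been reaped, a late `wait $pid` returns immediately with the saved exit status rather than blocking on a potentially recycled PID:

```shell
#!/usr/bin/env bash
(exit 42) &
pid=$!
sleep 1            # give the child ample time to exit and be reaped
wait "$pid"        # returns instantly with the remembered status
echo "late wait returned: $?"
```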

Starting with Bash 5.1, there is now an additional way of waiting for and handling multiple background jobs thanks to the introduction of wait -p.

Here’s an example:

#!/usr/bin/env bash
for ((i=0; i < 10; i++)); do
    secs=$((RANDOM % 10)); code=$((RANDOM % 256))
    (sleep ${secs}; exit ${code}) &
    echo "Started background job (pid: $!, sleep: ${secs}, code: ${code})"
done

while true; do
    wait -n -p pid; code=$?
    [[ -z "${pid}" ]] && break
    echo "Background job ${pid} finished with code ${code}"
done

The novelty here is that you now know exactly which one of the background jobs finished.
