A simple Bash pattern that came in handy recently:
Run a command every time it fails until it doesn't fail:
command
# Re-run until the exit status of the last run is zero (success)
while [ $? -ne 0 ]; do
    command
done
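As an aside, the same behaviour can be written a little more compactly with bash's until keyword, which keeps re-running its condition until it succeeds (the sleep is optional, just to avoid hammering a failing command in a tight loop):

until command; do
    sleep 1    # optional pause between retries
done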
Example
There was a long-running job that required exporting hundreds of thousands of records in small batches. When it was initially written, it was possible to set a page offset so that the job could be resumed after a failure:
vendor-export --batch-size=250 --offset=20
I performed a dry run with production data over the course of a week. This allowed me to implement error handling for every data error that might occur, and I was confident that the job would run in its entirety without failing due to any circumstance within our control.
But there was nothing practical we could do to mitigate network errors. It's certainly possible to implement a retry loop for these cases, but it wasn't practical within this system, as it would affect other critical systems. Given that the process was single-use, the risk of that work couldn't be justified.
And this is where the pattern comes in...
By adding a --resume option to the long-running command, it was possible to pick up exactly where the job failed:
vendor-export --batch-size=250 --resume
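The implementation details aren't important here; the general idea behind a resume option is to record the last successfully completed page somewhere durable and start from it on the next run. Here's a rough sketch in shell, purely for illustration (the checkpoint file, total_pages, and export_batch below are made-up placeholders, not how vendor-export actually does it):

total_pages=800                                  # hypothetical total number of batches
CHECKPOINT=/tmp/vendor-export.offset             # hypothetical progress file

# export_batch is a stand-in for exporting one batch of records
export_batch() { echo "exporting batch $1"; }

# Start from the last recorded offset, or from the beginning
offset=$(cat "$CHECKPOINT" 2>/dev/null || echo 0)

while [ "$offset" -lt "$total_pages" ]; do
    export_batch "$offset" || exit 1             # bail out on failure; the wrapper loop will retry
    offset=$((offset + 1))
    echo "$offset" > "$CHECKPOINT"               # persist progress after every successful batch
done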
Now, when it's time to actually run the job, just call this:
vendor-export --batch-size=250 --resume
while [ $? -ne 0 ]; do
    vendor-export --batch-size=250 --resume
done
Then go outside and enjoy the day...
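One practical note: if there's any chance the terminal session drops while you're away, the loop can live in a small script and be run under nohup (or inside tmux/screen) so it survives a disconnect. Something like this, where run-export.sh is just a hypothetical script containing the loop above:

nohup ./run-export.sh > export.log 2>&1 &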
Oops... There may be a tiny problem with this...
If you've already tried this pattern out, you may have run into it:
In some shells (and depending on how the command itself handles the signal), ctrl-c won't interrupt the loop. It kills the current run, but that run exits non-zero, so the loop immediately starts the next one and the script carries on. It doesn't happen everywhere, but it can.
If you encounter the problem, you just need to trap ctrl-c and exit:
# Exit the script cleanly when ctrl-c (SIGINT) is received
function quit-it() {
    exit 0
}
trap quit-it INT

vendor-export --batch-size=250 --resume
while [ $? -ne 0 ]; do
    vendor-export --batch-size=250 --resume
done
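For what it's worth, the helper function isn't strictly required; the same thing can be done with an inline trap:

trap 'exit 0' INT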