I like to use Python to stitch together little programs into services that run as daemons. Python makes it easy to run multiple threads with modern synchronization constructs and to run Unix commands and pipes using the “subprocess” module. However, I was having trouble cleanly killing my threads and subprocesses when I wanted to restart the daemon. Some of the reports I had read on the web warned against using multithreading, subprocesses, daemonization and signals together, but it’s all working fine for me. I thought I’d share my recipe and some of what I had learned here.
Python does not have a standard daemon module
Creating a daemon on Unix requires a “double-fork” recipe that I can only describe as part magic. The recipe linked here, which dates from 2001, seems to be the most frequently cited resource on how to fork a daemon.
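For orientation, the heart of that recipe looks something like this. This is a condensed sketch of the double-fork alone; the full recipe also redirects the standard streams:

```python
import os
import sys

def daemonize():
    # First fork: the parent returns to the shell, and the child is
    # guaranteed not to be a process-group leader, so setsid() can succeed.
    if os.fork() > 0:
        sys.exit(0)
    os.setsid()   # start a new session, detaching from the controlling tty
    # Second fork: the session leader exits, so the daemon can never
    # reacquire a controlling terminal.
    if os.fork() > 0:
        sys.exit(0)
    os.chdir("/")   # don't keep any directory pinned
    os.umask(0)
```

The two exits are the “magic”: after them, only the grandchild survives, fully detached from the terminal that started it.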
One may wonder why this recipe hasn’t been turned into a standard Python daemon package. I can think of two reasons. One: the code is small enough that it doesn’t deserve to be a package. Two: a good package would handle many possible daemonization needs. This recipe imposes restrictions on the daemon and only works for simple cases.
If you’re willing to accept its restrictions (such as leaving only the standard in/out/err file descriptors open), then it seems to be fine. I was quite happy to keep my service simple: stdout and stderr are the only two streams used, and the only signal handled is SIGTERM.
In the end, I chose to use the variant posted by Clark Evans here that added the daemon start/stop/restart behavior.
Python masks signals to subprocesses
This is where I got a little hung up. My service was running subprocesses: both as simple commands and from long-running threads. I was having a hard time terminating the sub-programs when I wanted my service to stop.
I found a great resource in this article:
Python masks the signals to subprocesses it starts (here “masks” means “does not propagate”). If you use the subprocess module to start a long-running job like this “sleep” command:
job = subprocess.Popen(["sleep", "6000"])
(so, se) = job.communicate()  # communicate() returns (stdout, stderr)
and then kill the parent Python process with a SIGTERM, the child “sleep” process will keep on running. The article referenced above explains how to set a Unix “session ID” (os.setsid) and how to send a signal to a “process group” (os.killpg). I modified the technique a little bit in the interest of simplification.
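To see the group mechanics in isolation, here is a small standalone sketch (not the exact code from the article): preexec_fn=os.setsid puts the child into a brand-new session, and so into a fresh process group whose ID equals the child’s PID.

```python
import os
import signal
import subprocess

# Start the long-running job as the leader of its own process group.
# preexec_fn runs os.setsid() in the child between fork and exec.
job = subprocess.Popen(["sleep", "6000"], preexec_fn=os.setsid)

# Signal the whole group, not just the immediate child, so anything
# the job itself spawned is terminated too.
os.killpg(os.getpgid(job.pid), signal.SIGTERM)

status = job.wait()   # negative status: killed by a signal
```

On Python 3.2 and later, passing start_new_session=True to Popen is the documented equivalent of preexec_fn=os.setsid.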
Use the threading.setDaemon() flag
Python will not exit if normal threads are still running: you must shut them down explicitly. However, Python will exit if only “daemon” threads are running. A daemon thread is simply something you deem unimportant enough that it does not require an explicit shut-down step. Write a daemon thread like this:
def __init__(self, *args, **kwargs):
    threading.Thread.__init__(self)
    self.setDaemon(True)
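Fleshed out into a runnable sketch (the Worker class and its idle loop are just placeholders; in current Python the daemon attribute replaces the old setDaemon() call):

```python
import threading
import time

class Worker(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        # Must be set before start(); the interpreter will exit even
        # while daemon threads are still running.
        self.daemon = True   # equivalent to the older self.setDaemon(True)

    def run(self):
        while True:
            time.sleep(1)    # stand-in for real background work

worker = Worker()
worker.start()
# The main program may now simply exit; no join() or stop flag is needed.
```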
Putting it all together
import os
import signal
import sys

SIGTERM_SENT = False

def sigterm_handler(signum, frame):
    global SIGTERM_SENT
    print >>sys.stderr, "SIGTERM handler. Shutting Down."
    if not SIGTERM_SENT:
        SIGTERM_SENT = True
        print >>sys.stderr, "Sending TERM to PG"
        os.killpg(os.getpgrp(), signal.SIGTERM)
    sys.exit()

def main():
    # set session ID to this process so we can kill group in sigterm handler
    os.setsid()
    signal.signal(signal.SIGTERM, sigterm_handler)
    # ... run daemon threads and subprocesses with impunity.
    # ... this function never returns ...

from daemonize import startstop

if __name__ == "__main__":
    # arguments per Clark Evans's recipe; the paths are illustrative
    startstop(stdout='/tmp/myservice.log', pidfile='/tmp/myservice.pid')
    main()
Here’s what we came up with. (Start at the bottom.) The startstop function does the daemonization double-fork, so the daemon process is a grandchild of the starting Python process. When main() is called, the effective PID is that of the daemon. Calling os.setsid() sets the session ID to the PID of the daemon (the same PID that is written to the pid-file).
The sigterm handler is called when a SIGTERM arrives. Its main purpose is to send SIGTERM to the process group of the session. (Recall that Python has masked the signals to its subprocess children, so we have to manage sending the signals ourselves.) This step terminates the child processes. However, it also re-sends SIGTERM to the main daemon process, which we didn’t want. The SIGTERM_SENT flag lets us ignore the second receipt of the TERM signal.
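The flag’s behavior can be checked in isolation. The sketch below is my own test scaffolding, not part of the service: it runs a child that installs the handler, signals its own process group, and still exits cleanly because the second TERM is ignored.

```python
import subprocess
import sys
import textwrap

child_src = textwrap.dedent("""
    import os, signal, sys

    SIGTERM_SENT = False

    def sigterm_handler(signum, frame):
        global SIGTERM_SENT
        if not SIGTERM_SENT:
            SIGTERM_SENT = True
            # This delivers SIGTERM back to us as well ...
            os.killpg(os.getpgrp(), signal.SIGTERM)
        # ... but on the second delivery we fall straight through here.
        sys.exit(0)

    os.setsid()   # fresh session, so killpg() only hits this process
    signal.signal(signal.SIGTERM, sigterm_handler)
    os.kill(os.getpid(), signal.SIGTERM)   # simulate an incoming TERM
""")

proc = subprocess.run([sys.executable, "-c", child_src])
```

Without the flag, the handler would keep re-signaling its own group; with it, the child exits with status 0.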
In the end, this recipe is fairly simple. Your mileage may vary, but if you keep your use of subprocesses and threads simple, it might work for you too.