Can someone explain in detail what “set -m” does?

14

The man page says only:

-m Job control is enabled.

But what does that actually mean?

I came across this command in an SO question; I have the same problem as the OP, which is that "fabric cannot start tomcat", and set -m solved it. The OP explained it a little, but I still don't quite understand:

The issue was in background tasks as they will be killed when the command ends.

The solution is simple: just add "set -m;" prefix before command.

    
by laike9m 16.04.2015 / 16:27

2 answers

8

Quoting the bash documentation:

JOB CONTROL
       Job  control  refers to  the  ability  to selectively  stop
       (suspend) the execution of  processes and continue (resume)
       their execution at a later point.  A user typically employs
       this facility via an interactive interface supplied jointly
       by the operating system kernel's terminal driver and bash.

So, simply put: having set -m (the default for interactive shells) lets you use job-control built-ins such as fg and bg, which are disabled under set +m (the default for non-interactive shells).
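A quick way to see this in action (an illustrative sketch; sleep stands in for any long-running command):

```shell
set -m                  # enable job control (monitor mode)
set -o | grep monitor   # should now report "monitor on"

sleep 30 &              # start a background job
jobs                    # list the jobs known to this shell
fg %1                   # bring job 1 to the foreground; under set +m,
                        # fg would fail with "no job control"
```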

It is not obvious to me what the connection is between job control and background processes being killed on exit, however, but I can confirm there is one: running set -m; (sleep 10 ; touch control-on) & will create the file even if you exit the shell right after typing that command, whereas set +m; (sleep 10 ; touch control-off) & will not.

I think the answer lies in the rest of the documentation for set -m:

-m      Monitor  mode. [...]                     Background pro‐
        cesses run in a separate process group and a  line  con‐
        taining  their exit status is printed upon their comple‐
        tion.

This means that background jobs started under set +m are not real "background processes" ("Background processes are those whose process group ID differs from the terminal's"): they share the same process group ID as the shell that started them, instead of having their own process group like proper background processes do. This explains the behavior observed when the shell exits before some of its background jobs: if I understand correctly, on exit a signal is sent to the processes in the same process group as the shell (killing background jobs started under set +m), but not to those in other process groups (thus sparing the true background processes started under set -m).
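The process-group difference is easy to observe directly (a minimal sketch; sleep 5 is just a placeholder job):

```shell
# With job control off, the background job shares the shell's PGID:
bash -c 'set +m; sleep 5 & ps -o pid,pgid,comm -p "$$,$!"'

# With job control on, the background job gets a process group of its own:
bash -c 'set -m; sleep 5 & ps -o pid,pgid,comm -p "$$,$!"'
```

In the first run the PGID column is identical for bash and sleep; in the second, sleep's PGID equals its own PID.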

So, in your case, the startup.sh script presumably starts a background job. When that script is run non-interactively, such as over SSH as in the question you linked, job control is disabled, the "background" job shares the remote shell's process group, and it is therefore killed as soon as that shell exits. Conversely, by enabling job control in that shell, the background job acquires its own process group and is not killed when its parent shell exits.
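You can simulate that teardown locally, without SSH, by signalling the launching shell's entire process group, which is roughly what happens when the session ends (a sketch under that assumption; the file paths are arbitrary):

```shell
# Start a throwaway shell in its own session; it backgrounds a job with
# job control off, so the job shares the shell's process group.
setsid bash -c 'set +m; sleep 30 & echo "$!" > /tmp/bg.pid; sleep 30' &
leader=$!
sleep 1

# Signal the whole process group (note the leading minus), as the
# terminal driver would at session teardown.
kill -TERM -- "-$leader"
sleep 1

kill -0 "$(cat /tmp/bg.pid)" 2>/dev/null \
  && echo "background job survived" \
  || echo "background job was killed with its group"
```

Replacing set +m with set -m in the inner shell makes the background job survive, because it is no longer in the group being signalled.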

    
by 16.04.2015 / 16:34
1

I found this in a GitHub issue list, and I think it actually answers your question.

It's not really a SSH problem, it's more the subtle behaviour around BASH non-interactive/interactive modes and signal propagation to process groups.

Following is based on https://stackoverflow.com/questions/14679178/why-does-ssh-wait-for-my-subshells-without-t-and-kill-them-with-t/14866774#14866774 and http://www.itp.uzh.ch/~dpotter/howto/daemonize, with some assumptions not fully validated, but tests about how this works seem to confirm.

pty/tty = false

The bash shell launched connects to the stdout/stderr/stdin of the started process and is kept running until there is nothing attached to the sockets and its children have exited. A good daemon process will ensure it doesn't wait for its children to exit: fork a child process and then exit. When in this mode no SIGHUP will be sent to the child process by SSH. I believe this will work correctly for most scripts executing a process that handles daemonizing itself and doesn't need to be backgrounded. Where init scripts use '&' to background a process, it's likely that the main problem will be whether the backgrounded process ever attempts to read from stdin, since that will trigger a SIGHUP if the session has been terminated.
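For this pty/tty = false case, the usual defensive pattern is to make sure a backgrounded process never touches the session's standard streams (a sketch; the server path and log file are placeholders):

```shell
# Detach all three standard streams so the backgrounded process cannot
# block on, or be signalled because of, the defunct SSH session.
nohup /opt/app/bin/server </dev/null >/var/log/server.log 2>&1 &
```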

pty/tty = true*

If the init script backgrounds the process started, the parent BASH shell will return an exit code to the SSH connection, which will in turn look to exit immediately since it isn't waiting on a child process to terminate and isn't blocked on stdout/stderr/stdin. This will cause a SIGHUP to be sent to the parent bash shell's process group, which, since job control is disabled in non-interactive mode in bash, will include the child processes just launched. Where a daemon process explicitly starts a new session when forking, or in the forked process, then it and its children won't receive the SIGHUP from the BASH parent process exiting. Note this is different from suspended jobs, which will see a SIGTERM. I suspect the problems around this only working sometimes have to do with a slight race condition. If you look at the standard approach to daemonizing - http://www.itp.uzh.ch/~dpotter/howto/daemonize - you'll see that in the code the new session is created by the forked process, which may not run before the parent exits, thus resulting in the random success/failure behaviour mentioned above. A sleep statement will allow enough time for the forked process to have created a new session, which is why it works for some cases.
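The race described here can be sidestepped by creating the new session from the launching side, rather than hoping the forked child calls setsid() before its parent exits. With the setsid(1) utility (a sketch; the service command and log path are placeholders):

```shell
# setsid(1) starts the command in a brand-new session, so it can never
# be in the exiting shell's process group - no race with the parent.
setsid /usr/local/bin/myservice </dev/null >/var/log/myservice.log 2>&1 &
```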

pty/tty = true and job control is explicitly enabled in bash

SSH won't connect to the stdout/stderr/stdin of the bash shell or any launched child processes, which means it will exit as soon as the parent bash shell finishes executing the requested commands. In this case, with job control explicitly enabled, any processes launched by the bash shell with '&' to background them will be placed into a separate process group immediately and will not receive the SIGHUP signal when the parent process of the BASH session (the SSH connection in this case) exits.

What's needed to fix

I think the solutions just need to be explicitly mentioned in the run/sudo operations documentation as a special case when working with background processes/services. Basically either use 'pty=false', or where that is not possible, explicitly enable job control as the first command, and the behaviour will be correct.

From link

    
by 15.05.2016 / 17:09