How to safely shut down every running VM during reboot/shutdown in Qubes OS 4.0 without incurring a stall/delay due to a timeout? (systemd issue)

2

Due to some issue that also affects Qubes 4.0, when restarting or shutting down the computer from dom0 there is a delay before the action completes, unless all running VMs are shut down first.

I have to manually run a script that shuts down all VMs before doing a Restart/Shutdown from the xfce Logout menu, or else I can expect a stall of at least 30 seconds (that is, if I lower DefaultTimeoutStopSec from its default of 90s to 30s).
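
For reference, that timeout lives in systemd's manager configuration; lowering it in dom0 looks roughly like the sketch below (the drop-in file name is my own; editing DefaultTimeoutStopSec= directly in /etc/systemd/system.conf works too, and systemctl daemon-reexec, or a reboot, makes the manager pick it up):

# /etc/systemd/system.conf.d/10-stop-timeout.conf
[Manager]
DefaultTimeoutStopSec=30s

$ sudo systemctl daemon-reexec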

Here is the script, and a sample of its output:

[ctor@dom0 ~]$ cat preshutdown 
#!/bin/bash

xl list
time qvm-shutdown --verbose --all --wait; ec="$?"
echo "exitcode: '$ec'"
time while xl list|grep -q -F '(null)'; do xl list;sleep 1; done
exit $ec

$ ./preshutdown 
Name                                        ID   Mem VCPUs  State   Time(s)
Domain-0                                     0  4080     6     r-----     108.6
sys-net                                      1   384     2     -b----       7.0
sys-net-dm                                   2   144     1     -b----      16.5
sys-firewall                                 3  2917     2     -b----       9.7
gmail-basedon-w-s-f-fdr28                    4  3247     2     -b----      28.6
stackexchangelogins-w-s-f-fdr28              5  3241     2     -b----      24.3
dev01-w-s-f-fdr28                            7  8481     6     -b----      32.6
2018-09-06 09:37:08,187 [MainProcess selector_events.__init__:65] asyncio: Using selector: EpollSelector

real    0m14.959s
user    0m0.065s
sys 0m0.017s
exitcode: '0'
Name                                        ID   Mem VCPUs  State   Time(s)
Domain-0                                     0  4095     6     r-----     123.0
(null)                                       1     0     1     --ps-d       7.8
(null)                                       3     0     0     --ps-d      11.0
Name                                        ID   Mem VCPUs  State   Time(s)
Domain-0                                     0  4095     6     r-----     123.1
(null)                                       1     0     1     --ps-d       7.8
(null)                                       3     0     0     --ps-d      11.0
Name                                        ID   Mem VCPUs  State   Time(s)
Domain-0                                     0  4095     6     r-----     123.4
(null)                                       1     0     1     --ps-d       7.8
(null)                                       3     0     0     --ps-d      11.0
Name                                        ID   Mem VCPUs  State   Time(s)
Domain-0                                     0  4095     6     r-----     123.7
(null)                                       1     0     1     --ps-d       7.8
Name                                        ID   Mem VCPUs  State   Time(s)
Domain-0                                     0  4095     6     r-----     123.8
(null)                                       1     0     1     --ps-d       7.8
Name                                        ID   Mem VCPUs  State   Time(s)
Domain-0                                     0  4095     6     r-----     123.9
(null)                                       1     0     1     --ps-d       7.8
Name                                        ID   Mem VCPUs  State   Time(s)
Domain-0                                     0  4095     6     r-----     124.0
(null)                                       1     0     1     --ps-d       7.8

real    0m7.093s
user    0m0.024s
sys 0m0.085s

However, dom0 is stuck on Fedora 25 (Fedora 28 is only available for VMs), so systemd cannot easily be upgraded (or I don't yet know how): it is at version 231 while 240 is the newest on github. I am not sure whether this is a systemd problem or whether I simply don't know how to properly modify qubes-core.service to guarantee that it is stopped before systemd attempts to take down some DM (device-mapper) devices.
I tried using this and this answer, but the outcome did not change.
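
The kind of change in question (and the one the second answer below argues against) would be an ordering drop-in roughly like the following sketch; since stop jobs run in the reverse of the start ordering, putting the service After= the device should, in theory, make the service stop before the device does (the drop-in path below is my own naming):

# /etc/systemd/system/qubes-core.service.d/ordering.conf
[Unit]
After=dev-block-253:0.device

$ sudo systemctl daemon-reload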

Here is a sample of the systemd output when it stalls:

[ 443.660340] systemd[1]: qubes-core.service: Installed new job qubes-core.service/stop as 797
[ 443.660426] systemd[1]: dev-block-253:0.device: Installed new job dev-block-253:0.device/stop as 867
[ 533.755109] systemd[1]: dev-block-253:0.device: Job dev-block-253:0.device/stop timed out.
[ 534.047847] systemd[1]: qubes-core.service: About to execute: /usr/bin/pkill qubes-guid
[ 534.048939] systemd[1]: Stopping Qubes Dom0 startup setup...
[ 542.648718] systemd[1]: Stopped Qubes Dom0 startup setup.
[ 547.940019] systemd[1]: dev-block-253:0.device: Failed to send unit remove signal for dev-block-253:0.device: Transport endpoint is not connected

versus when it does not stall:

[ 67.643774] systemd[1]: dev-block-253:0.device: Installed new job dev-block-253:0.device/stop as 777
[ 67.643982] systemd[1]: qubes-core.service: Installed new job qubes-core.service/stop as 860
[   68.032308] systemd[1]: qubes-core.service: About to execute: /usr/bin/pkill qubes-guid
[ 68.033396] systemd[1]: Stopping Qubes Dom0 startup setup...
[ 76.932065] systemd[1]: Stopped Qubes Dom0 startup setup.
[ 76.985423] systemd[1]: dev-block-253:0.device: Redirecting stop request from dev-block-253:0.device to sys-devices-virtual-block-dm\x2d0.device.
[ 82.205556] systemd[1]: dev-block-253:0.device: Failed to send unit remove signal for dev-block-253:0.device: Transport endpoint is not connected

Oddly enough, the no-stall and stall cases above happened without me changing anything in systemd: the first 2 reboots did not stall, the 3rd one did. (full details here)

How can every running VM be shut down safely during reboot/shutdown in Qubes OS 4.0, i.e. without me having to manually run a script before going to the Restart/Shutdown entry in the xfce menu?

Possible ideas:
What if all those devices that are timing out are being stopped when the user logs out (session-2.scope?)? That is, they are listed by systemctl --user status *.device, which might mean they take precedence, so they always stop BEFORE qubes-core.service stops, because the latter is --system. What do you think? Here is what systemctl --user shows while running (logged in, with VMs running): link
> EDIT: I tried with a --user service, but it seems everything gets stopped at once (i.e. concurrently), so my script and the timeout above both finish at the same time.
EDIT: I found that either there is no way, or I don't know how, to tell systemd to stop (and finish stopping) my --system service before systemd tries to stop certain .device units, so my service and the .device units both fail with a timeout at the same time (90 seconds later). See the log here.
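
For completeness, a --user unit of the kind mentioned in the EDIT above could look roughly like the sketch below (the unit name and contents are my own, shown only to illustrate the approach, which as noted did not change the outcome):

# ~/.config/systemd/user/pre-logout-shutdown.service
[Unit]
Description=Shut down all VMs when the user session goes away

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/true
# runs when the user manager tears the unit down at logout/shutdown
ExecStop=%h/bin/preshutdown

[Install]
WantedBy=default.target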

    
by Marcus Linsner, 06.09.2018 / 10:13

3 answers

1

The problem was fixed in qubes-gui-dom0-4.0.8-1.29.fc25 by this code change (this commit).
The redsparrow workaround is therefore no longer needed.
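
Getting the fixed package into dom0 should just be the usual dom0 update path, something like the following (add --enablerepo=qubes-dom0-current-testing if it has not reached the stable repo yet):

$ sudo qubes-dom0-update qubes-gui-dom0
# then reboot, or at least restart the running qubes-guid instances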

Reproducing the patch here:

From 612cfe5925d32d8af0269163ee3ad627de4a8226 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <[email protected]>
Date: Thu, 13 Sep 2018 12:22:19 +0200
Subject: [PATCH] xside: avoid making X11 calls in signal handler

This is very simlar fix to QubesOS/qubes-issues#1406
2148a00 "Do not make X11 requests in X11 error handler"

Since signals can be sent asynchronously at any time, it could also hit
processing another X11 message. For this reason, avoid making X11 calls
if exit() is called from signal handler.

Fixes QubesOS/qubes-issues#1581
---
 gui-daemon/xside.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/gui-daemon/xside.c b/gui-daemon/xside.c
index cca28da..3e12012 100644
--- a/gui-daemon/xside.c
+++ b/gui-daemon/xside.c
@@ -2455,6 +2455,13 @@ static void handle_message(Ghandles * g)
 /* signal handler - connected to SIGTERM */
 static void dummy_signal_handler(int UNUSED(x))
 {
+    /* The exit(0) below will call release_all_mapped_mfns (registerd with
+     * atexit(3)), which would try to release window images with XShmDetach. We
+     * can't send X11 requests if one is currently being handled. Since signals
+     * are asynchronous, we don't know that. Clean window images
+     * without calling to X11. And hope that X server will call XShmDetach
+     * internally when cleaning windows of disconnected client */
+    release_all_shm_no_x11_calls();
     exit(0);
 }

What this does is let qubes-guid terminate cleanly (e.g. on SIGTERM), so it no longer requires the SIGKILL from redsparrow. For the rest of the information, see the redsparrow answer below.
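
A quick (if disruptive) way to confirm the fix is in effect: with at least one VM's GUI daemon running, a plain SIGTERM should now be enough, e.g.:

$ pkill qubes-guid              # no -9; note this closes the windows of running VMs
$ sleep 2; pgrep -a qubes-guid  # should print nothing if the daemons exited cleanly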

    
answered 13.09.2018 / 19:30
1

I don't think you should use After=dev-block-253:0.device, since that only guarantees the device is present in /dev. Try RequiresMountsFor=/my/mountpoint (reference), which guarantees the filesystem is mounted (and unmounted) at the right time.
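
A minimal sketch of what that looks like in a unit file (the mountpoint is a placeholder; RequiresMountsFor= adds Requires= and After= on the corresponding mount unit, so at shutdown the service is stopped before the filesystem is unmounted):

[Unit]
Description=Example service tied to a mounted filesystem
RequiresMountsFor=/my/mountpoint

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/true
ExecStop=/usr/bin/true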

    
answered 09.09.2018 / 11:15
0

I found a workaround, but the underlying problem still exists (namely that the *.device units are stopped concurrently with everything else once a restart (or shutdown) is issued from xfce; there is a log here, including the service file I used, showing that both time out at the same moment, after the 90s).

The workaround: have another service, redsparrow.service, run pkill -9 qubes-guid (the -9 is required; a plain pkill, i.e. SIGTERM, won't do it) early in the reboot/shutdown phase. It runs concurrently with the attempts to stop the *.device units (such as /dev/dm-0), but before qubes-core.service, which otherwise, without that pkill in place, would not run until those device stop jobs had timed out (90 seconds later, by default). qubes-core.service is still what shuts down all running VMs via the preshutdown script, but it no longer has to deal with qubes-guid.

Here are the 3 files:

$ ls -la ~/bin/preshutdown 
-rwxrwxr-x 1 root root 517 Sep 11 11:21 /home/ctor/bin/preshutdown

$ ls -la /usr/lib/systemd/system/qubes-core.service
-rw-r--r-- 1 root root 4339 Sep 11 11:09 /usr/lib/systemd/system/qubes-core.service

$ ls -la /usr/lib/systemd/system/redsparrow.service
-rw-r--r-- 1 root root 4434 Sep 11 11:15 /usr/lib/systemd/system/redsparrow.service

preshutdown:

#!/bin/bash

tput -Txterm-256color setab 2;echo "Starting $0 $(date) $(cat /proc/uptime)";echo;tput -Txterm-256color sgr0;
xl list
time qvm-shutdown --verbose --all --wait; ec="$?"
echo "exitcode: '$ec'"
time while xl list|grep -q -F '(null)'; do xl list;sleep 1; done #this only happens if there are any qubes-guid pids active (eg. pkill -9 qubes-guid is required to never see any of these '(null)'s                                                                                                                                  
tput -Txterm-256color setab 1;echo "Ending $0 on $(date) $(cat /proc/uptime)";echo;tput -Txterm-256color sgr0;
exit $ec

qubes-core.service:

[Unit]
Description=Qubes Dom0 startup setup
After=qubes-db-dom0.service libvirtd.service xenconsoled.service qubesd.service qubes-qmemman.service
Before=redsparrow.service

[Service]
Type=oneshot
StandardOutput=syslog
RemainAfterExit=yes
# Needed to avoid rebooting before all VMs have shut down.
TimeoutStopSec=180
ExecStart=/usr/lib/qubes/startup-misc.sh
ExecStop=/home/ctor/bin/preshutdown
# QubesDB daemons stop after 60s timeout in worst case; speed it up, since no
# VMs are running now
ExecStop=-/usr/bin/pkill qubesdb-daemon
#ExecStop=/usr/bin/true

[Install]
WantedBy=multi-user.target
Also=qubes-meminfo-writer-dom0.service qubes-qmemman.service
Alias=qubes_core.service

redsparrow.service:

[Unit]
Description=Red Sparrow
After=qubes-db-dom0.service libvirtd.service xenconsoled.service qubesd.service qubes-qmemman.service qubes-core.service


PartOf=graphical.target
BindsTo=graphical.target

#note: after modifying this file, I'm doing: systemctl daemon-reload 

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/echo "Red Sparrow START"
ExecStop=-/usr/bin/echo "Red Sparrow stopping"
#ExecStop=-/usr/bin/ps axuw
#^ the same only 1 pid exists here: [  435.084168] ps[4348]: ctor      3706  0.0  0.1 160852  5820 ?        RLs  10:30   0:00 /usr/bin/qubes-guid -N sys-net -c 0xcc0000 -i /usr/share/icons/hicolor/128x128/devices/appvm-red.png -l 1 -q -d 1 -n
#ExecStop=-/usr/bin/sleep 2
#ExecStop=-/usr/bin/pkill qubes-guid
#ExecStop=-/usr/bin/sleep 2
#ExecStop=-/usr/bin/ps axuw
#^ the only thing that remains: [   80.569153] ps[4096]: ctor      3798  4.5  0.1 160856  5864 ?        RLs  10:28   0:02 /usr/bin/qubes-guid -N sys-net -c 0xcc0000 -i /usr/share/icons/hicolor/12
ExecStop=-/usr/bin/pkill -9 qubes-guid
#ExecStop=/home/ctor/bin/preshutdown
ExecStop=-/usr/bin/echo "Red Sparrow stopped"

[Install]
WantedBy=graphical.target

Both are enabled:

$ systemctl  status qubes-core redsparrow
● qubes-core.service - Qubes Dom0 startup setup
   Loaded: loaded (/usr/lib/systemd/system/qubes-core.service; enabled; vendor preset: enabled)
   Active: active (exited) since Tue 2018-09-11 20:29:02 CEST; 33min ago
  Process: 1957 ExecStart=/usr/lib/qubes/startup-misc.sh (code=exited, status=0/SUCCESS)
 Main PID: 1957 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/qubes-core.service

● redsparrow.service - Red Sparrow
   Loaded: loaded (/usr/lib/systemd/system/redsparrow.service; enabled; vendor preset: disabled)
   Active: active (exited) since Tue 2018-09-11 20:29:02 CEST; 33min ago
  Process: 2027 ExecStart=/usr/bin/echo Red Sparrow START (code=exited, status=0/SUCCESS)
 Main PID: 2027 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/redsparrow.service

Sep 11 20:29:02 dom0 echo[2027]: Red Sparrow START
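
(For reference, after dropping the unit files in place, enabling them was the usual routine, roughly the commands below; qubes-core.service ships enabled already, so after editing it only the daemon-reload was needed:

$ sudo systemctl daemon-reload
$ sudo systemctl enable redsparrow.service
$ sudo systemctl start redsparrow.service
)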

Now, with xfce Menu -> Log Out -> Restart (or Shut Down), redsparrow's ExecStop will pkill -9 qubes-guid, which allows qubes-core.service's ExecStop to run; here is the log:

[  200.591802] systemd-logind[2023]: System is rebooting.
[  200.595924] systemd[1]: Removed slice system-getty.slice.
[  200.596665] systemd[1]: Stopping Session 2 of user ctor.
[  200.596682] systemd[1]: Stopped target Sound Card.
[  200.596813] systemd[1]: Stopping Authorization Manager...
[  200.599840] qmemman.daemon.algo[2026]: balance_when_enough_memory(xen_free_memory=61089422, total_mem_pref=2053975142.4, total_available_memory=29606214252.600002)
[  200.600535] qmemman.daemon.algo[2026]: left_memory=1809448348 acceptors_count=1
[  200.600732] systemd[1]: Stopping Manage Sound Card State (restore and store)...
[  200.601406] systemd[1]: Stopping RealtimeKit Scheduling Policy Service...
[  200.607158] alsactl[2021]: alsactl daemon stopped
[  200.615006] systemd[1]: Stopping User Manager for UID 1000...
[  200.616624] systemd[1]: Stopping Restore /run/initramfs on shutdown...
[  200.616918] systemd[1]: Stopping Accounts Service...
[  200.620666] systemd[1]: Stopped target Graphical Interface.
[  200.620734] systemd[1]: Stopped target Multi-User System.
[  200.620764] systemd[1]: Stopped target Login Prompts.
[  200.621105] systemd[1]: Stopped xen-init-dom0, initialise Dom0 configuration (xenstore nodes, JSON configuration stub).
[  200.621156] systemd[1]: Stopped target Timers.
[  200.621183] systemd[1]: Stopped Discard unused blocks once a week.
[  200.621201] systemd[1]: Stopped Daily Cleanup of Temporary Directories.
[  200.622702] systemd[1]: Stopping Red Sparrow...
[  200.622928] systemd[1]: Stopping Light Display Manager...
[  200.623165] systemd[1]: Stopping Command Scheduler...
[  200.624182] systemd[1]: Stopping Daemon for power management...
[  200.624282] systemd[1]: Stopping Serial Getty on hvc0...
[  200.625237] echo[4138]: Red Sparrow stopping
[  200.625703] crond[3466]: (CRON) INFO (Shutting down)
[  200.627458] lightdm[3468]: Error opening audit socket: Protocol not supported
[  200.636590] qubes.WindowIconUpdater-sys-net[3839]: Traceback (most recent call last):
[  200.638285] qubes.WindowIconUpdater-sys-net[3839]:   File "/usr/lib/python3.5/site-packages/xcffib/__init__.py", line 557, in wrapper
[  200.638607] qubes.WindowIconUpdater-sys-net[3839]:     return f(*args)
[  200.638857] qubes.WindowIconUpdater-sys-net[3839]:   File "/usr/lib/python3.5/site-packages/xcffib/__init__.py", line 596, in poll_for_event
[  200.639190] qubes.WindowIconUpdater-sys-net[3839]:     self.invalid()
[  200.639438] qubes.WindowIconUpdater-sys-net[3839]:   File "/usr/lib/python3.5/site-packages/xcffib/__init__.py", line 545, in invalid
[  200.639689] qubes.WindowIconUpdater-sys-net[3839]:     raise ConnectionException(err)
[  200.639929] qubes.WindowIconUpdater-sys-net[3839]: xcffib.ConnectionException: xcb connection errors because of socket, pipe and other stream errors.
[  200.640319] qubes.WindowIconUpdater-sys-net[3839]: 
[  200.640618] qubes.WindowIconUpdater-sys-net[3839]: During handling of the above exception, another exception occurred:
[  200.640889] qubes.WindowIconUpdater-sys-net[3839]: 
[  200.641333] qubes.WindowIconUpdater-sys-net[3839]: Traceback (most recent call last):
[  200.641586] qubes.WindowIconUpdater-sys-net[3839]:   File "/usr/lib/qubes/icon-receiver", line 362, in <module>
[  200.641837] qubes.WindowIconUpdater-sys-net[3839]:     rcvd.handle_input()
[  200.642168] qubes.WindowIconUpdater-sys-net[3839]:   File "/usr/lib/qubes/icon-receiver", line 347, in handle_input
[  200.642438] qubes.WindowIconUpdater-sys-net[3839]:     self.handle_events()
[  200.642694] qubes.WindowIconUpdater-sys-net[3839]:   File "/usr/lib/qubes/icon-receiver", line 259, in handle_events
[  200.643110] qubes.WindowIconUpdater-sys-net[3839]:     for ev in iter(self.conn.poll_for_event, None):
[  200.643359] qubes.WindowIconUpdater-sys-net[3839]:   File "/usr/lib/python3.5/site-packages/xcffib/__init__.py", line 559, in wrapper
[  200.643601] qubes.WindowIconUpdater-sys-net[3839]:     self.invalid()
[  200.643838] qubes.WindowIconUpdater-sys-net[3839]:   File "/usr/lib/python3.5/site-packages/xcffib/__init__.py", line 545, in invalid
[  200.644193] qubes.WindowIconUpdater-sys-net[3839]:     raise ConnectionException(err)
[  200.644434] qubes.WindowIconUpdater-sys-net[3839]: xcffib.ConnectionException: xcb connection errors because of socket, pipe and other stream errors.
[  200.644677] qubes.WindowIconUpdater-sys-firewall[3864]: Traceback (most recent call last):
[  200.644965] qubes.WindowIconUpdater-sys-firewall[3864]:   File "/usr/lib/python3.5/site-packages/xcffib/__init__.py", line 557, in wrapper
[  200.645342] qubes.WindowIconUpdater-sys-firewall[3864]:     return f(*args)
[  200.645600] qubes.WindowIconUpdater-sys-firewall[3864]:   File "/usr/lib/python3.5/site-packages/xcffib/__init__.py", line 596, in poll_for_event
[  200.645849] qubes.WindowIconUpdater-sys-firewall[3864]:     self.invalid()
[  200.646243] qubes.WindowIconUpdater-sys-firewall[3864]:   File "/usr/lib/python3.5/site-packages/xcffib/__init__.py", line 545, in invalid
[  200.646531] qubes.WindowIconUpdater-sys-firewall[3864]:     raise ConnectionException(err)
[  200.646797] qubes.WindowIconUpdater-sys-firewall[3864]: xcffib.ConnectionException: xcb connection errors because of socket, pipe and other stream errors.
[  200.647162] qubes.WindowIconUpdater-sys-firewall[3864]: 
[  200.647424] qubes.WindowIconUpdater-sys-firewall[3864]: During handling of the above exception, another exception occurred:
[  200.647674] qubes.WindowIconUpdater-sys-firewall[3864]: 
[  200.647916] qubes.WindowIconUpdater-sys-firewall[3864]: Traceback (most recent call last):
[  200.648311] qubes.WindowIconUpdater-sys-firewall[3864]:   File "/usr/lib/qubes/icon-receiver", line 362, in <module>
[  200.648558] qubes.WindowIconUpdater-sys-firewall[3864]:     rcvd.handle_input()
[  200.648807] qubes.WindowIconUpdater-sys-firewall[3864]:   File "/usr/lib/qubes/icon-receiver", line 347, in handle_input
[  200.649139] qubes.WindowIconUpdater-sys-firewall[3864]:     self.handle_events()
[  200.649384] qubes.WindowIconUpdater-sys-firewall[3864]:   File "/usr/lib/qubes/icon-receiver", line 259, in handle_events
[  200.649629] qubes.WindowIconUpdater-sys-firewall[3864]:     for ev in iter(self.conn.poll_for_event, None):
[  200.649878] qubes.WindowIconUpdater-sys-firewall[3864]:   File "/usr/lib/python3.5/site-packages/xcffib/__init__.py", line 559, in wrapper
[  200.650235] qubes.WindowIconUpdater-sys-firewall[3864]:     self.invalid()
[  200.650479] qubes.WindowIconUpdater-sys-firewall[3864]:   File "/usr/lib/python3.5/site-packages/xcffib/__init__.py", line 545, in invalid
[  200.650727] qubes.WindowIconUpdater-sys-firewall[3864]:     raise ConnectionException(err)
[  200.651978] qubes.WindowIconUpdater-sys-firewall[3864]: xcffib.ConnectionException: xcb connection errors because of socket, pipe and other stream errors.
[  200.652910] systemd[1]: Stopped Daemon for power management.
[  200.653152] systemd[1]: Stopped Command Scheduler.
[  200.653407] systemd[1]: Stopped Accounts Service.
[  200.653646] systemd[1]: Stopped Authorization Manager.
[  200.654222] systemd[1]: Stopped Serial Getty on hvc0.
[  200.654727] systemd[1]: Stopped RealtimeKit Scheduling Policy Service.
[  200.655287] systemd[1]: Stopped Manage Sound Card State (restore and store).
[  200.663540] systemd[1]: Removed slice system-serial\x2dgetty.slice.
[  200.672584] echo[4157]: Red Sparrow stopped
[  200.674556] systemd[1]: Stopped Red Sparrow.
[  200.699263] qmemman.daemon.algo[2026]: balance_when_enough_memory(xen_free_memory=61089422, total_mem_pref=1527043584.0, total_available_memory=30133145811.0)
[  200.699587] qmemman.daemon.algo[2026]: left_memory=3881142054 acceptors_count=1
[  200.799498] qmemman.daemon.algo[2026]: balance_when_enough_memory(xen_free_memory=61089422, total_mem_pref=1483161907.2, total_available_memory=30177027487.8)
[  200.799822] qmemman.daemon.algo[2026]: left_memory=4120066821 acceptors_count=1
[  200.826422] systemd[3567]: Received SIGRTMIN+24 from PID 4162 (kill).
[  200.828412] systemd[1]: Stopped User Manager for UID 1000.
[  200.865974] systemd[1]: Stopped Session 2 of user ctor.
[  200.866413] systemd[1]: Removed slice User Slice of ctor.
[  200.866537] systemd[1]: Stopping Login Service...
[  200.899754] qmemman.daemon.algo[2026]: balance_when_enough_memory(xen_free_memory=61089422, total_mem_pref=1254328627.2, total_available_memory=30405860767.8)
[  200.901140] systemd[1]: Stopped Light Display Manager.
[  200.903464] qmemman.daemon.algo[2026]: left_memory=5636897981 acceptors_count=1
[  200.904710] systemd[1]: Starting Show Plymouth Reboot Screen...
[  200.907987] systemd[1]: Stopping Permit User Sessions...
[  200.914738] systemd[1]: Stopped Permit User Sessions.
[  200.918868] systemd[1]: Stopped Start Qubes VM sys-firewall.
[  200.920535] systemd[1]: Stopped Start Qubes VM sys-net.
[  200.922285] systemd[1]: Stopping Qubes memory information reporter...
[  200.924197] systemd[1]: Removed slice system-qubes\x2dvm.slice.
[  200.926371] systemd[1]: Stopped Qubes memory information reporter.
[  200.928154] systemd[1]: Started Show Plymouth Reboot Screen.
[  200.931635] systemd[1]: Stopping Qubes Dom0 startup setup...
[  200.933900] systemd[1]: Stopped Login Service.
[  200.936259] systemd[1]: Stopped target User and Group Name Lookups.
[  200.939732] preshutdown[4180]: [42mStarting /home/ctor/bin/preshutdown Tue Sep 11 11:23:01 CEST 2018 200.93 1122.69
[  200.947411] preshutdown[4180]: (B[mName                                        ID   Mem VCPUs  State   Time(s)
[  200.949391] preshutdown[4180]: Domain-0                                     0  4091     6     r-----      87.1
[  200.951247] preshutdown[4180]: sys-net                                      1  1584     2     -b----       7.6
[  200.953088] preshutdown[4180]: sys-net-dm                                   2   144     1     -b----      28.0
[  200.954922] preshutdown[4180]: sys-firewall                                 3  3983     2     -b----      11.8
[  201.016675] preshutdown[4180]: 2018-09-11 11:23:01,357 [MainProcess selector_events.__init__:65] asyncio: Using selector: EpollSelector
[  201.028315] qubesd[2019]: socket.send() raised exception.
[  201.030663] qubesd[2019]: socket.send() raised exception.
[  201.032457] qubesd[2019]: socket.send() raised exception.
[  201.034216] qubesd[2019]: socket.send() raised exception.
[  201.035962] qubesd[2019]: socket.send() raised exception.
[  201.064381] systemd[1]: Stopped Restore /run/initramfs on shutdown.
[  201.225628] qubesd[2019]: socket.send() raised exception.
[  201.227562] qubesd[2019]: socket.send() raised exception.
[  201.229354] qubesd[2019]: socket.send() raised exception.
[  201.231028] qubesd[2019]: socket.send() raised exception.
[  201.232667] qubesd[2019]: socket.send() raised exception.
[  201.234316] qubesd[2019]: socket.send() raised exception.
[  201.235938] qubesd[2019]: socket.send() raised exception.
[  201.237634] qubesd[2019]: socket.send() raised exception.
[  201.239364] qubesd[2019]: socket.send() raised exception.
[  201.241047] qubesd[2019]: socket.send() raised exception.
[  201.242875] qubesd[2019]: socket.send() raised exception.
[  201.244541] qubesd[2019]: socket.send() raised exception.
[  201.246804] qubesd[2019]: socket.send() raised exception.
[  201.248443] qubesd[2019]: socket.send() raised exception.
[  201.250096] qubesd[2019]: socket.send() raised exception.
[  201.251738] qubesd[2019]: socket.send() raised exception.
[  201.253419] qubesd[2019]: socket.send() raised exception.
[  201.255032] qubesd[2019]: socket.send() raised exception.
[  201.256755] qubesd[2019]: socket.send() raised exception.
[  201.258889] qmemman.daemon.algo[2026]: balance_when_enough_memory(xen_free_memory=61089422, total_mem_pref=1213940019.2, total_available_memory=30446249375.8)
[  201.260626] qmemman.daemon.algo[2026]: left_memory=4911686116 acceptors_count=1
[  201.731202] qubesd[2019]: socket.send() raised exception.
[  201.740268] qubesd[2019]: socket.send() raised exception.
[  201.745647] qubesd[2019]: socket.send() raised exception.
[  201.747226] qubesd[2019]: socket.send() raised exception.
[  201.748701] qubesd[2019]: socket.send() raised exception.
[  201.750275] qubesd[2019]: socket.send() raised exception.
[  201.751786] qubesd[2019]: socket.send() raised exception.
[  201.753275] qubesd[2019]: socket.send() raised exception.
[  201.754712] qubesd[2019]: socket.send() raised exception.
[  201.756190] qubesd[2019]: socket.send() raised exception.
[  201.757710] qubesd[2019]: socket.send() raised exception.
[  201.759242] qubesd[2019]: socket.send() raised exception.
[  201.760601] qubesd[2019]: socket.send() raised exception.
[  201.762002] qubesd[2019]: socket.send() raised exception.
[  201.763410] qubesd[2019]: socket.send() raised exception.
[  201.764778] qubesd[2019]: socket.send() raised exception.
[  201.766153] qubesd[2019]: socket.send() raised exception.
[  201.767548] qubesd[2019]: socket.send() raised exception.
[  201.768842] qubesd[2019]: socket.send() raised exception.
[  202.692233] usb 1-13: USB disconnect, device number 3
[  202.938070] usb 1-13: new low-speed USB device number 5 using xhci_hcd
[  203.076940] usb 1-13: New USB device found, idVendor=1c4f, idProduct=0034, bcdDevice= 1.10
[  203.083241] usb 1-13: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[  203.089527] usb 1-13: Product: Usb Mouse
[  203.095764] usb 1-13: Manufacturer: SIGMACHIP
[  203.105476] input: SIGMACHIP Usb Mouse as /devices/pci0000:00/0000:00:14.0/usb1/1-13/1-13:1.0/0003:1C4F:0034.0006/input/input20
[  203.106823] hid-generic 0003:1C4F:0034.0006: input,hidraw3: USB HID v1.10 Mouse [SIGMACHIP Usb Mouse] on usb-0000:00:14.0-13/input0
[  205.915385] systemd[1]: Received SIGRTMIN+20 from PID 4172 (plymouthd).
[  209.068302] qmemman.daemon.algo[2026]: balance_when_enough_memory(xen_free_memory=61089422, total_mem_pref=1213940019.2, total_available_memory=30446249375.8)
[  209.070311] qmemman.daemon.algo[2026]: left_memory=4911686116 acceptors_count=1
[  209.546380] qmemman.daemon.algo[2026]: balance_when_enough_memory(xen_free_memory=4252705625, total_mem_pref=864441446.4, total_available_memory=30793060151.6)
[  209.546778] qmemman.systemstate[2026]: stat: dom '0' act=27404795973 pref=864441446.4 last_target=27404795973
[  209.547128] qmemman.systemstate[2026]: stat: xenfree=4305134425 memset_reqs=[('0', 31625844096)]
[  209.547400] qmemman.systemstate[2026]: mem-set domain 0 to 31625844096
[  209.840745] dmeventd[920]: No longer monitoring thin pool qubes_dom0-pool00-tpool.
[  209.934055] lvm[920]: Monitoring thin pool qubes_dom0-pool00-tpool.
[  210.081859] dmeventd[920]: No longer monitoring thin pool qubes_dom0-pool00-tpool.
[  210.210190] lvm[920]: Monitoring thin pool qubes_dom0-pool00-tpool.
[  213.325837] pciback 0000:00:1f.6: disabling bus mastering
[  213.349492] qmemman.daemon.algo[2026]: balance_when_enough_memory(xen_free_memory=37638021, total_mem_pref=864441446.4, total_available_memory=30799040670.6)
[  213.455122] pciback 0000:00:1f.6: restoring config space at offset 0x10 (was 0x0, writing 0xdf200000)
[  213.455289] pciback 0000:00:1f.6: restoring config space at offset 0x4 (was 0x100000, writing 0x100002)
[  213.567179] pciback 0000:00:1f.6: restoring config space at offset 0x10 (was 0x0, writing 0xdf200000)
[  213.567346] pciback 0000:00:1f.6: restoring config space at offset 0x4 (was 0x100000, writing 0x100002)
[  213.774489] qmemman.daemon.algo[2026]: balance_when_enough_memory(xen_free_memory=187661981, total_mem_pref=864441446.4, total_available_memory=30949064630.6)
[  213.774891] qmemman.systemstate[2026]: stat: dom '0' act=31625844096 pref=864441446.4 last_target=31625844096
[  213.775249] qmemman.systemstate[2026]: stat: xenfree=240090781 memset_reqs=[('0', 31781692570)]
...
[  213.946074] qmemman.daemon.algo[2026]: balance_when_enough_memory(xen_free_memory=1695245711, total_mem_pref=864441446.4, total_available_memory=32612496834.6)
[  213.946439] qmemman.systemstate[2026]: stat: dom '0' act=31781692570 pref=864441446.4 last_target=31781692570
[  213.946719] qmemman.systemstate[2026]: stat: xenfree=1747674511 memset_reqs=[('0', 33443461342)]
[  213.947077] qmemman.systemstate[2026]: mem-set domain 0 to 33443461342
[  214.347377] dmeventd[920]: No longer monitoring thin pool qubes_dom0-pool00-tpool.
[  214.551177] lvm[920]: Monitoring thin pool qubes_dom0-pool00-tpool.
[  214.679097] dmeventd[920]: No longer monitoring thin pool qubes_dom0-pool00-tpool.
[  214.770524] lvm[920]: Monitoring thin pool qubes_dom0-pool00-tpool.
[  215.185692] preshutdown[4180]: real  0m14.238s
[  215.186110] preshutdown[4180]: user  0m0.066s
[  215.186406] preshutdown[4180]: sys   0m0.018s
[  215.186673] preshutdown[4180]: exitcode: '0'
[  215.191118] preshutdown[4180]: real  0m0.005s
[  215.191443] preshutdown[4180]: user  0m0.000s
[  215.191718] preshutdown[4180]: sys   0m0.007s
[  215.196536] preshutdown[4180]: [41mEnding /home/ctor/bin/preshutdown on Tue Sep 11 11:23:15 CEST 2018 215.19 1202.38
[  215.198502] preshutdown[4180]: (B[m
[  215.206933] qubesdb-daemon[2020]: terminating
[  215.207692] systemd[1]: Stopped Qubes Dom0 startup setup.
[  215.208143] systemd[1]: Stopped Qubes DB agent.
[  215.209235] systemd[1]: Stopping Qubes memory management daemon...
[  215.209358] systemd[1]: Stopping Xenconsoled - handles logging from guest consoles and hypervisor...
[  215.209485] systemd[1]: Stopping Virtualization daemon...
[  215.209585] systemd[1]: Stopping Qubes OS daemon...
[  215.213172] qubesd[2019]: caught SIGTERM, exiting
[  215.213861] systemd[1]: Stopped Qubes memory management daemon.
[  215.215044] systemd[1]: Stopped Xenconsoled - handles logging from guest consoles and hypervisor.
[  215.218932] libvirtd[2046]: S3 disabled
[  215.219387] libvirtd[2046]: S4 disabled
[  215.221252] systemd[1]: Stopped Virtualization daemon.
[  215.221362] systemd[1]: Stopped target Remote File Systems.
[  215.221496] systemd[1]: Stopping D-Bus System Message Bus...
[  215.221626] systemd[1]: Stopping The Xen xenstore...
[  215.222350] systemd[1]: Stopped The Xen xenstore.
[  215.226248] systemd[1]: Stopped D-Bus System Message Bus.
[  215.253063] systemd[1]: Stopped Qubes OS daemon.
[  215.254560] systemd[1]: Stopping LVM2 PV scan on device 253:0...
[  215.254639] systemd[1]: Stopped target Basic System.
[  215.254766] systemd[1]: Stopped target Sockets.
[  215.254861] systemd[1]: Closed Virtual machine lock manager socket.
[  215.254952] systemd[1]: Closed D-Bus System Message Bus Socket.
[  215.255007] systemd[1]: Closed Virtual machine log manager socket.
[  215.255060] systemd[1]: Stopped target Slices.
[  215.255266] systemd[1]: Removed slice User and Session Slice.
[  215.255339] systemd[1]: Stopped target System Initialization.
[  215.256645] systemd[1]: Stopping Load/Save Random Seed...
[  215.257975] systemd[1]: Stopping Update UTMP about System Boot/Shutdown...
[  215.258418] systemd[1]: Stopped Setup Virtual Console.
[  215.258801] systemd[1]: Stopped Load legacy module configuration.
[  215.258889] systemd[1]: Stopped target Encrypted Volumes.
[  215.260343] systemd[1]: Stopping Cryptography Setup for luks-9ed952b5-2aa8-4564-b700-fb23f5c9e94b...
[  215.260455] systemd[1]: Stopped Forward Password Requests to Plymouth Directory Watch.
[  215.260556] systemd[1]: Stopped target Paths.
[  215.264978] systemd[1]: Stopped Forward Password Requests to Wall Directory Watch.
[  215.266212] systemd[1]: Stopped Load/Save Random Seed.
[  215.268034] systemd[1]: Stopped Update UTMP about System Boot/Shutdown.
[  215.268401] systemd[1]: Stopped Create Volatile Files and Directories.
[  215.268447] systemd[1]: Stopped target Local File Systems.
[  215.269611] systemd[1]: Unmounting Temporary Directory...
[  215.270813] systemd[1]: Unmounting mount xenstore file system...
[  215.272169] systemd[1]: Unmounting /boot/efi...
[  215.273388] systemd[1]: Unmounting /run/user/1000...
[  215.274538] systemd[1]: Stopped Configure read-only root support.
[  215.277805] systemd[1]: Stopped LVM2 PV scan on device 253:0.
[  215.278047] systemd[1]: Unmounted Temporary Directory.
[  215.278274] systemd[1]: Stopped target Swap.
[  215.278431] systemd[1]: Removed slice system-lvm2\x2dpvscan.slice.
[  215.280718] systemd[1]: Unmounted mount xenstore file system.
[  215.282456] systemd[1]: Unmounted /run/user/1000.
[  215.286594] systemd[1]: Unmounting Mount /proc/xen files...
[  215.291600] systemd[1]: Unmounted /boot/efi.
[  215.293002] systemd[1]: Stopped File System Check on /dev/disk/by-uuid/181A-D5EF.
[  215.297142] systemd[1]: Removed slice system-systemd\x2dfsck.slice.
[  215.301093] systemd[1]: Stopped target Local File Systems (Pre).
[  215.302668] systemd[1]: Stopped Remount Root and Kernel File Systems.
[  215.307508] systemd[1]: Stopping Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
[  215.310823] systemd[1]: Stopped Create Static Device Nodes in /dev.
[  215.314899] systemd[1]: Unmounted Mount /proc/xen files.
[  215.330404] dmeventd[920]: No longer monitoring thin pool qubes_dom0-pool00-tpool.
[  215.343572] lvm[5225]:   48 logical volume(s) in volume group "qubes_dom0" unmonitored
[  215.344870] systemd[1]: Stopped Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
[  215.346348] systemd[1]: Stopping LVM2 metadata daemon...
[  215.346491] lvmetad[931]: Failed to accept connection errno 11.
[  215.350566] systemd[1]: Stopped LVM2 metadata daemon.
[  220.082279] systemd-cryptsetup[5204]: Failed to deactivate: Device or resource busy
[  220.086230] systemd[1]: systemd-cryptsetup@luks\x2d9ed952b5\x2d2aa8\x2d4564\x2db700\x2dfb23f5c9e94b.service: Control process exited, code=exited status=1
[  220.088102] systemd[1]: Stopped Cryptography Setup for luks-9ed952b5-2aa8-4564-b700-fb23f5c9e94b.
[  220.089553] systemd[1]: systemd-cryptsetup@luks\x2d9ed952b5\x2d2aa8\x2d4564\x2db700\x2dfb23f5c9e94b.service: Unit entered failed state.
[  220.090953] systemd[1]: systemd-cryptsetup@luks\x2d9ed952b5\x2d2aa8\x2d4564\x2db700\x2dfb23f5c9e94b.service: Failed with result 'exit-code'.
[  220.091112] systemd[1]: Reached target Unmount All Filesystems.
[  220.095181] systemd[1]: Removed slice system-systemd\x2dcryptsetup.slice.
[  220.097862] systemd[1]: Reached target Shutdown.
[  220.099211] systemd[1]: Reached target Final Step.
[  220.100552] systemd[1]: Starting Reboot...
[  220.105370] systemd[1]: Shutting down.

(I actually can't paste it here in full, due to Body is limited to 30000 characters; you entered 35634., but the gist of it is above)

    
answered 11.09.2018 / 21:12