I spent a good amount of time trying to track this down. My specific challenge: a Postgres server whose data volume I want to run on ZFS. The baseline is XFS.
First off, my experiments tell me that ashift=12 is wrong. If there is a magic ashift number, it isn't 12. I'm using 0 (which lets ZFS auto-detect the sector size) and I'm getting very good results.
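In case it's useful, ashift is fixed at pool-creation time and can be verified afterwards. A sketch (the device name is just the one used later in this post; zdb output format may vary by ZFS version):

```shell
# ashift=0 asks ZFS to auto-detect the device's sector size
sudo zpool create -o ashift=0 testpool /dev/xvdbd
# Verify what the pool actually chose:
sudo zdb -C testpool | grep ashift
```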
I also experimented with various zfs options, and the ones that gave me the results below are:
atime=off
- I don't need access times
checksum=off
- I'm striping, not mirroring
compression=lz4
- performance is better with compression (CPU trade-off?)
exec=off
- this is for data, not executables
logbias=throughput
- read on the interwebs that this is better for Postgres
recordsize=8k
- matches Postgres's 8k blocks
sync=standard
- tried disabling sync; didn't see much benefit
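These properties can also be set in one shot at pool creation with -O flags, instead of a loop of zfs set calls afterwards. A sketch under the same assumptions (same illustrative device name):

```shell
sudo zpool create -f \
    -O atime=off -O checksum=off -O compression=lz4 \
    -O exec=off -O logbias=throughput -O recordsize=8k \
    -O sync=standard \
    testpool /dev/xvdbd
# Confirm the properties took effect:
sudo zfs get atime,checksum,compression,exec,logbias,recordsize,sync testpool
```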
My tests below show better-than-XFS performance (please comment if you see errors in my testing!).
With that, my next step is to try Postgres running on a 2 x EBS ZFS filesystem.
My specific setup:
EC2: m4.xlarge instance
EBS: 250 GB gp2 volumes
kernel: Linux [...] 3.13.0-105-generic #152-Ubuntu SMP [...] x86_64 x86_64 x86_64 GNU/Linux *
First, I wanted to test raw EBS performance. Using a variation of the fio command above, I came up with the incantation below. Note: I'm using 8k blocks because that's what I've read PostgreSQL writes are:
ubuntu@ip-172-31-30-233:~$ device=/dev/xvdbd; sudo dd if=/dev/zero of=${device} bs=1M count=100 && sudo fio --name randwrite --ioengine=libaio --iodepth=4 --rw=randwrite --bs=8k --size=400G --numjobs=4 --runtime=60 --group_reporting --fallocate=none --filename=${device}
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.250631 s, 418 MB/s
randwrite: (g=0): rw=randwrite, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=4
...
randwrite: (g=0): rw=randwrite, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=4
fio-2.1.3
Starting 4 processes
Jobs: 4 (f=4): [wwww] [100.0% done] [0KB/13552KB/0KB /s] [0/1694/0 iops] [eta 00m:00s]
randwrite: (groupid=0, jobs=4): err= 0: pid=18109: Tue Feb 14 19:13:53 2017
write: io=3192.2MB, bw=54184KB/s, iops=6773, runt= 60327msec
slat (usec): min=2, max=805209, avg=585.73, stdev=6238.19
clat (usec): min=4, max=805236, avg=1763.29, stdev=10716.41
lat (usec): min=15, max=805241, avg=2349.30, stdev=12321.43
clat percentiles (usec):
| 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 19],
| 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26],
| 70.00th=[ 27], 80.00th=[ 29], 90.00th=[ 36], 95.00th=[15808],
| 99.00th=[31872], 99.50th=[35584], 99.90th=[99840], 99.95th=[199680],
| 99.99th=[399360]
bw (KB /s): min= 156, max=1025440, per=26.00%, avg=14088.05, stdev=67584.25
lat (usec) : 10=0.01%, 20=20.53%, 50=72.20%, 100=0.86%, 250=0.17%
lat (usec) : 500=0.13%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.59%, 20=2.01%, 50=3.29%
lat (msec) : 100=0.11%, 250=0.05%, 500=0.02%, 750=0.01%, 1000=0.01%
cpu : usr=0.22%, sys=1.34%, ctx=9832, majf=0, minf=114
IO depths : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=408595/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
WRITE: io=3192.2MB, aggrb=54184KB/s, minb=54184KB/s, maxb=54184KB/s, mint=60327msec, maxt=60327msec
Disk stats (read/write):
xvdbd: ios=170/187241, merge=0/190688, ticks=180/8586692, in_queue=8590296, util=99.51%
The raw EBS volume performance is WRITE: io=3192.2MB.
Now, testing XFS with the same fio command:
Jobs: 4 (f=4): [wwww] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 00m:00s]
randwrite: (groupid=0, jobs=4): err= 0: pid=17441: Tue Feb 14 19:10:27 2017
write: io=3181.9MB, bw=54282KB/s, iops=6785, runt= 60024msec
slat (usec): min=3, max=21077K, avg=587.19, stdev=76081.88
clat (usec): min=4, max=21077K, avg=1768.72, stdev=131857.04
lat (usec): min=23, max=21077K, avg=2356.23, stdev=152444.62
clat percentiles (usec):
| 1.00th=[ 29], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 52],
| 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 69],
| 70.00th=[ 79], 80.00th=[ 99], 90.00th=[ 137], 95.00th=[ 274],
| 99.00th=[17024], 99.50th=[25472], 99.90th=[70144], 99.95th=[120320],
| 99.99th=[1564672]
bw (KB /s): min= 2, max=239872, per=66.72%, avg=36217.04, stdev=51480.84
lat (usec) : 10=0.01%, 20=0.03%, 50=15.58%, 100=64.51%, 250=14.55%
lat (usec) : 500=1.36%, 750=0.33%, 1000=0.25%
lat (msec) : 2=0.68%, 4=0.67%, 10=0.71%, 20=0.58%, 50=0.59%
lat (msec) : 100=0.10%, 250=0.02%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2000=0.01%, >=2000=0.01%
cpu : usr=0.43%, sys=4.81%, ctx=269518, majf=0, minf=110
IO depths : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=407278/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
WRITE: io=3181.9MB, aggrb=54282KB/s, minb=54282KB/s, maxb=54282KB/s, mint=60024msec, maxt=60024msec
Disk stats (read/write):
xvdbd: ios=4/50983, merge=0/319694, ticks=0/2067760, in_queue=2069888, util=26.21%
Our baseline is WRITE: io=3181.9MB; very close to raw disk speed.
Now, on to ZFS, with WRITE: io=3181.9MB as the reference:
ubuntu@ip-172-31-30-233:~$ sudo zpool create testpool xvdbd -f && (for option in atime=off checksum=off compression=lz4 exec=off logbias=throughput recordsize=8k sync=standard; do sudo zfs set $option testpool; done;) && sudo fio --name randwrite --ioengine=libaio --iodepth=4 --rw=randwrite --bs=8k --size=400G --numjobs=4 --runtime=60 --group_reporting --fallocate=none --filename=/testpool/testfile; sudo zpool destroy testpool
randwrite: (g=0): rw=randwrite, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=4
...
randwrite: (g=0): rw=randwrite, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=4
fio-2.1.3
Starting 4 processes
randwrite: Laying out IO file(s) (1 file(s) / 409600MB)
randwrite: Laying out IO file(s) (1 file(s) / 409600MB)
randwrite: Laying out IO file(s) (1 file(s) / 409600MB)
randwrite: Laying out IO file(s) (1 file(s) / 409600MB)
Jobs: 4 (f=4): [wwww] [100.0% done] [0KB/41328KB/0KB /s] [0/5166/0 iops] [eta 00m:00s]
randwrite: (groupid=0, jobs=4): err= 0: pid=18923: Tue Feb 14 19:17:18 2017
write: io=4191.7MB, bw=71536KB/s, iops=8941, runt= 60001msec
slat (usec): min=10, max=1399.9K, avg=442.26, stdev=4482.85
clat (usec): min=2, max=1400.4K, avg=1343.38, stdev=7805.37
lat (usec): min=56, max=1400.4K, avg=1786.61, stdev=9044.27
clat percentiles (usec):
| 1.00th=[ 62], 5.00th=[ 75], 10.00th=[ 87], 20.00th=[ 108],
| 30.00th=[ 122], 40.00th=[ 167], 50.00th=[ 620], 60.00th=[ 1176],
| 70.00th=[ 1496], 80.00th=[ 2320], 90.00th=[ 2992], 95.00th=[ 4128],
| 99.00th=[ 6816], 99.50th=[ 9536], 99.90th=[30592], 99.95th=[66048],
| 99.99th=[185344]
bw (KB /s): min= 2332, max=82848, per=25.46%, avg=18211.64, stdev=15010.61
lat (usec) : 4=0.01%, 50=0.09%, 100=14.60%, 250=26.77%, 500=5.96%
lat (usec) : 750=5.27%, 1000=4.24%
lat (msec) : 2=20.96%, 4=16.74%, 10=4.93%, 20=0.30%, 50=0.08%
lat (msec) : 100=0.04%, 250=0.03%, 500=0.01%, 2000=0.01%
cpu : usr=0.61%, sys=9.48%, ctx=177901, majf=0, minf=107
IO depths : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=536527/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
WRITE: io=4191.7MB, aggrb=71535KB/s, minb=71535KB/s, maxb=71535KB/s, mint=60001msec, maxt=60001msec
Note that this was better than XFS: WRITE: io=4191.7MB. I'm pretty sure this is due to compression.
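One way to sanity-check the compression theory is to ask ZFS for the achieved ratio right after the run (before the zpool destroy at the end of the command above):

```shell
# compressratio reports the achieved lz4 ratio; comparing used vs
# logicalused shows how much less actually hit the disk
sudo zfs get compressratio,used,logicalused testpool
```

A caveat worth keeping in mind: depending on fio's buffer settings, the test data may be more compressible than real Postgres pages, which would flatter ZFS here.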
For kicks, I'll add a second volume:
ubuntu@ip-172-31-30-233:~$ sudo zpool create testpool xvdb{c,d} -f && (for option in atime=off checksum=off compression=lz4 exec=off logbias=throughput recordsize=8k sync=standard; do sudo zfs set $option testpool; done;) && sudo fio --name randwrite --ioengine=libaio --iodepth=4 --rw=randwrite --bs=8k --size=400G --numjobs=4 --runtime=60 --group_reporting --fallocate=none --filename=/testpool/testfile; sudo zpool destroy testpool
randwrite: (g=0): rw=randwrite, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=4
...
randwrite: (g=0): rw=randwrite, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=4
fio-2.1.3
Starting 4 processes
randwrite: Laying out IO file(s) (1 file(s) / 409600MB)
randwrite: Laying out IO file(s) (1 file(s) / 409600MB)
randwrite: Laying out IO file(s) (1 file(s) / 409600MB)
randwrite: Laying out IO file(s) (1 file(s) / 409600MB)
Jobs: 4 (f=4): [wwww] [100.0% done] [0KB/71936KB/0KB /s] [0/8992/0 iops] [eta 00m:00s]
randwrite: (groupid=0, jobs=4): err= 0: pid=20901: Tue Feb 14 19:23:30 2017
write: io=5975.9MB, bw=101983KB/s, iops=12747, runt= 60003msec
slat (usec): min=10, max=1831.2K, avg=308.61, stdev=4419.95
clat (usec): min=3, max=1831.6K, avg=942.64, stdev=7696.18
lat (usec): min=58, max=1831.8K, avg=1252.25, stdev=8896.67
clat percentiles (usec):
| 1.00th=[ 70], 5.00th=[ 92], 10.00th=[ 106], 20.00th=[ 129],
| 30.00th=[ 386], 40.00th=[ 490], 50.00th=[ 692], 60.00th=[ 796],
| 70.00th=[ 932], 80.00th=[ 1160], 90.00th=[ 1624], 95.00th=[ 2256],
| 99.00th=[ 5344], 99.50th=[ 8512], 99.90th=[30592], 99.95th=[60672],
| 99.99th=[117248]
bw (KB /s): min= 52, max=112576, per=25.61%, avg=26116.98, stdev=15313.32
lat (usec) : 4=0.01%, 10=0.01%, 50=0.04%, 100=7.17%, 250=19.04%
lat (usec) : 500=14.36%, 750=15.36%, 1000=17.41%
lat (msec) : 2=20.28%, 4=4.82%, 10=1.13%, 20=0.25%, 50=0.08%
lat (msec) : 100=0.04%, 250=0.02%, 2000=0.01%
cpu : usr=1.05%, sys=15.14%, ctx=396649, majf=0, minf=103
IO depths : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=764909/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
WRITE: io=5975.9MB, aggrb=101982KB/s, minb=101982KB/s, maxb=101982KB/s, mint=60003msec, maxt=60003msec
With a second volume, WRITE: io=5975.9MB, so ~1.8x the writes.
A third volume gives us WRITE: io=6667.5MB, so ~2.1x the writes.
And a fourth volume gives us WRITE: io=6552.9MB. For this instance type, it looks like I nearly saturate the EBS network with two volumes, definitely with three, and it's no better with four (750 * 3 = 2250 IOPS).
* From this video: make sure to use Linux kernel 3.8+ to get all the EBS goodness.