Why does ZFS slow down large file creation after 2.4 GB?


ZFS on Linux: I am creating a large tar archive and watching it grow as files are added. It writes very quickly until the archive reaches about 2.4 GB in size, and then it just crawls along for hours.

The same archive on ext4 has no such problem. Does anyone have any idea why this might be?
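For context, the workload is essentially the following (the paths here are placeholders, not the real ones), and the iostat numbers further down come from watching the disks while it runs; `zpool iostat` is another way to watch the pool directly:

    # create the archive (placeholder paths)
    tar -cf /z/backup.tar /some/source/dir

    # watch per-disk activity (sample output below)
    iostat -xk 2

    # or watch the pool as a whole, refreshing every 2 seconds
    zpool iostat -v z 2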

The ZFS filesystem is on a mirrored 1 TB vdev, so there is plenty of space:

    zpool list z
    NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
    z      928G   463G   465G         -    28%    49%  1.00x  ONLINE  -
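For completeness, the mirror layout and per-device health can be confirmed like this (output omitted here):

    # vdev layout and any device errors
    zpool status -v z

    # space and fragmentation broken down per vdev
    zpool list -v z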

Edit: adding the output of `zfs get all` and iostat...

    root@io:~# zfs get all z
    NAME  PROPERTY              VALUE                  SOURCE
    z     type                  filesystem             -
    z     creation              Sat Jul 25  8:29 2015  -
    z     used                  472G                   -
    z     available             427G                   -
    z     referenced            19K                    -
    z     compressratio         1.10x                  -
    z     mounted               yes                    -
    z     quota                 none                   default
    z     reservation           none                   default
    z     recordsize            128K                   default
    z     mountpoint            /z                     default
    z     sharenfs              off                    default
    z     checksum              on                     default
    z     compression           lz4                    local
    z     atime                 on                     default
    z     devices               on                     default
    z     exec                  on                     default
    z     setuid                on                     default
    z     readonly              off                    default
    z     zoned                 off                    default
    z     snapdir               hidden                 default
    z     aclinherit            restricted             default
    z     canmount              on                     default
    z     xattr                 on                     default
    z     copies                1                      default
    z     version               5                      -
    z     utf8only              off                    -
    z     normalization         none                   -
    z     casesensitivity       sensitive              -
    z     vscan                 off                    default
    z     nbmand                off                    default
    z     sharesmb              off                    default
    z     refquota              none                   default
    z     refreservation        none                   default
    z     primarycache          all                    default
    z     secondarycache        all                    default
    z     usedbysnapshots       0                      -
    z     usedbydataset         19K                    -
    z     usedbychildren        472G                   -
    z     usedbyrefreservation  0                      -
    z     logbias               latency                default
    z     dedup                 off                    default
    z     mlslabel              none                   default
    z     sync                  standard               default
    z     refcompressratio      1.00x                  -
    z     written               19K                    -
    z     logicalused           521G                   -
    z     logicalreferenced     9.50K                  -
    z     filesystem_limit      none                   default
    z     snapshot_limit        none                   default
    z     filesystem_count      none                   default
    z     snapshot_count        none                   default
    z     snapdev               hidden                 default
    z     acltype               off                    default
    z     context               none                   default
    z     fscontext             none                   default
    z     defcontext            none                   default
    z     rootcontext           none                   default
    z     relatime              off                    default
    z     redundant_metadata    all                    default
    z     overlay               off                    default
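One thing that stands out above is atime=on with relatime=off, which means every file read also generates a metadata write to update the access time. Purely as an experiment (not something I have confirmed makes any difference here), it can be switched off on the dataset:

    # experiment only: stop access-time updates on this dataset
    zfs set atime=off z

    # put it back afterwards
    zfs set atime=on z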

When it's going fast:

    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               4.30    0.25    8.97    9.29    0.00   77.20

    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sda               0.00     0.00    0.00    0.50     0.00     6.00    24.00     0.00    0.00    0.00    0.00   0.00   0.00
    sdb               0.00     0.50    0.00    1.00     0.00     6.00    12.00     0.00    2.00    0.00    2.00   2.00   0.20
    sdc               0.00     0.00  108.50  199.50  3702.50 11780.25   100.54     3.78   12.35   23.93    6.06   2.69  82.80
    sdd               0.00     0.00  255.00  177.50  1930.50 10308.25    56.60     2.39    5.57    5.49    5.68   1.95  84.40

    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               4.00    0.13   10.98    2.98    0.00   81.92

    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
    sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
    sdc               0.00     0.00  401.50    0.00  1168.50     0.00     5.82     1.09    2.71    2.71    0.00   1.48  59.40
    sdd               0.00     0.00  443.50    0.00  9012.00     0.00    40.64     1.70    3.83    3.83    0.00   1.31  58.00


After 2.1 gigs:

    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               2.41    0.00    3.99   15.76    0.00   77.85

    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sda               0.00     0.00    0.00    4.50     0.00    20.00     8.89     0.00    0.00    0.00    0.00   0.00   0.00
    sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
    sdc               1.00     0.00  173.00   51.00  8988.00  1645.75    94.94     3.31   14.79   17.47    5.73   4.09  91.60
    sdd               2.00     0.00  357.50   36.50 21646.00   818.75   114.03     3.90   10.39   11.14    3.01   2.19  86.40

    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               3.17    0.00    2.09   10.71    0.00   84.03

    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
    sda               0.00     7.50    0.00    1.50     0.00    40.00    53.33     0.00    2.67    0.00    2.67   2.67   0.40
    sdb               0.00    28.00    0.50   13.00     2.00   162.00    24.30     0.03    1.93    0.00    2.00   1.93   2.60
    sdc               2.00     0.00  360.00    0.00 22623.50     0.00   125.69     5.33   14.65   14.65    0.00   2.71  97.60
    sdd               0.00     0.00  163.50    0.00  7950.25     0.00    97.25     3.46   21.17   21.17    0.00   5.83  95.40

So there is a lot more reading going on when it slows down. Any idea why? Tar should just be writing, I would think.
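To pin down what is actually doing those reads while it crawls, a couple of things I could check (assuming iotop is installed and that this ZFS-on-Linux build exposes the ARC counters under /proc/spl/kstat):

    # show only processes currently doing I/O
    iotop -o

    # current ARC size versus its configured maximum
    grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats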

Thanks.

by Stu, 18.10.2015 / 01:53

0 answers