Duplicity - S3 backup fails with “[Errno 105] No buffer space available”


I have a nightly script on my Ubuntu 10.04 VPS that uses duplicity (0.6.24) to run incremental, encrypted backups to Amazon S3. The script worked fine until about a month ago, when it started failing with errors like the following:

Upload 's3://s3.amazonaws.com/{BUCKET}/duplicity-full.20140519T222412Z.vol6.difftar.gpg' failed (attempt #5, reason: error: [Errno 105] No buffer space available)
Giving up trying to upload s3://s3.amazonaws.com/{BUCKET}/duplicity-full.20140519T222412Z.vol6.difftar.gpg after 5 attempts
Backend error detail: Traceback (most recent call last):
  File "/usr/local/bin/duplicity", line 1502, in <module>
    with_tempdir(main)
  File "/usr/local/bin/duplicity", line 1496, in with_tempdir
    fn()
  File "/usr/local/bin/duplicity", line 1345, in main
    do_backup(action)
  File "/usr/local/bin/duplicity", line 1466, in do_backup
    full_backup(col_stats)
  File "/usr/local/bin/duplicity", line 538, in full_backup
    globals.backend)
  File "/usr/local/bin/duplicity", line 420, in write_multivol
    (tdp, dest_filename, vol_num)))
  File "/usr/local/lib/python2.6/dist-packages/duplicity/asyncscheduler.py", line 145, in schedule_task
    return self.__run_synchronously(fn, params)
  File "/usr/local/lib/python2.6/dist-packages/duplicity/asyncscheduler.py", line 171, in __run_synchronously
    ret = fn(*params)
  File "/usr/local/bin/duplicity", line 419, in <lambda>
    async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename, vol_num: put(tdp, dest_filename, vol_num),
  File "/usr/local/bin/duplicity", line 310, in put
    backend.put(tdp, dest_filename)
  File "/usr/local/lib/python2.6/dist-packages/duplicity/backends/_boto_single.py", line 266, in put
    raise BackendException("Error uploading %s/%s" % (self.straight_url, remote_filename))
BackendException: Error uploading s3://s3.amazonaws.com/{BUCKET}/duplicity-full.20140519T222412Z.vol6.difftar.gpg

Several duplicity volumes do get uploaded before the error occurs, and if I run the backup script again it resumes where it left off, so I can eventually complete the backup, but only by re-running it until it has worked through all 30 volumes.

The duplicity command I am using is:

duplicity --full-if-older-than 1M \
      --encrypt-key={KEY} \
      --sign-key={KEY} \
      --exclude={PATH} \
      {PATH} \
      s3://s3.amazonaws.com/{BUCKET} -v8
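
For reference, the upload goes through boto (the _boto_single.py frame in the traceback above), so a minimal standalone upload along the same path would look roughly like the sketch below; the bucket, key name and file path are placeholders, and it assumes boto can already find the AWS credentials (e.g. in ~/.boto or environment variables):

# Sketch only: upload one file through classic boto, the same library
# duplicity's S3 backend uses. All names here are placeholders.
import boto

conn = boto.connect_s3()                          # credentials from ~/.boto or env vars
bucket = conn.get_bucket('{BUCKET}')              # the bucket from the duplicity URL
key = bucket.new_key('enobufs-test.bin')          # throwaway key name
key.set_contents_from_filename('/tmp/testfile')   # pushes the local file to S3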

How can I prevent this error?

    
by Greg 20.05.2014 / 01:14

0 answers