What Joris states may be correct. The version of Swift that is running is quite vintage, with 'customizations'. Varnish, however, is not used for authenticated requests (boss-backup, and thus duplicity); those go directly to the Swift proxy servers.
I've got no control over Swift or the backend, since that's owned by a different team. I do, however, have around 5% of over 8000 duplicity backups reporting this issue every day.
It would make duplicity more resilient, since either a retry of the version check or a retry of the upload (as specified with --num-retries) would solve the issue. This backend might be flaky sometimes, and yes, that should be fixed, but so might other backends. (From https://aws.amazon.com/s3/faqs/: "Amazon S3 Standard and Standard - IA are designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years. In addition, Amazon S3 is designed to sustain the concurrent loss of data in two facilities.")
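To illustrate the kind of resilience being asked for, here is a minimal sketch of a generic retry wrapper around a flaky backend call (such as a Swift version check or an upload). This is illustrative only and uses hypothetical names, not duplicity's actual internals; it just mirrors what a --num-retries-style setting would do.

```python
import time

def with_retries(operation, num_retries=5, delay=0.0):
    """Call operation(); on a transient failure, retry up to num_retries times."""
    last_error = None
    for attempt in range(1, num_retries + 1):
        try:
            return operation()
        except Exception as err:  # a real backend would catch specific errors only
            last_error = err
            time.sleep(delay)  # back off before the next attempt
    raise last_error

# Example: a hypothetical version check that fails twice, then succeeds.
calls = {"n": 0}

def flaky_version_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient backend error")
    return "v1.0"

print(with_retries(flaky_version_check))  # succeeds on the third attempt
```

With a wrapper like this around the version check and the upload path, a transient 5%-of-the-time backend hiccup would be absorbed instead of failing the whole backup run.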