B2
NAME:
singularity storage update b2 - Backblaze B2
USAGE:
singularity storage update b2 [command options] <name|id>
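For example, to point an existing B2 remote at a new application key, the options documented below are passed before the remote name. The remote name "myb2" and the credential values here are placeholders:
> singularity storage update b2 --account 0123456789ab --key exampleApplicationKey myb2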
DESCRIPTION:
--account
Account ID or Application Key ID.
--key
Application Key.
--endpoint
Endpoint for the service.
Leave blank normally.
--test-mode
A flag string for X-Bz-Test-Mode header for debugging.
This is for debugging purposes only. Setting it to one of the strings
below will cause b2 to return specific errors:
* "fail_some_uploads"
* "expire_some_account_authorization_tokens"
* "force_cap_exceeded"
These will be set in the "X-Bz-Test-Mode" header which is documented
in the [b2 integrations checklist](https://www.backblaze.com/docs/cloud-storage-integration-checklist).
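As a sketch, to make the backend simulate upload failures during a debugging session, one of the strings above is passed as the flag value (the remote name "myb2" is hypothetical):
> singularity storage update b2 --test-mode fail_some_uploads myb2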
--versions
Include old versions in directory listings.
Note that when using this no file write operations are permitted,
so you can't upload files or delete them.
--version-at
Show file versions as they were at the specified time.
Note that when using this no file write operations are permitted,
so you can't upload files or delete them.
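A sketch of both read-only listing modes (the remote name "myb2" is hypothetical, and the exact set of accepted time formats is an assumption; an RFC 3339 timestamp is shown here):
> singularity storage update b2 --versions myb2
> singularity storage update b2 --version-at 2024-01-31T00:00:00Z myb2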
--hard-delete
Permanently delete files on remote removal, otherwise hide files.
--upload-cutoff
Cutoff for switching to chunked upload.
Files above this size will be uploaded in chunks of "--b2-chunk-size".
This value should be set no larger than 4.657 GiB (== 5 GB).
--copy-cutoff
Cutoff for switching to multipart copy.
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
The minimum is 0 and the maximum is 4.6 GiB.
--chunk-size
Upload chunk size.
When uploading large files, chunk the file into this size.
Must fit in memory. These chunks are buffered in memory and there
might be a maximum of "--transfers" chunks in progress at once.
5,000,000 Bytes is the minimum size.
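As an illustration of tuning these thresholds together, within the limits stated above (the remote name "myb2" is hypothetical and the values are arbitrary):
> singularity storage update b2 --upload-cutoff 500Mi --chunk-size 200Mi --copy-cutoff 4Gi myb2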
--upload-concurrency
Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded
concurrently.
Note that chunks are stored in memory and there may be up to
"--transfers" * "--b2-upload-concurrency" chunks stored at once
in memory.
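As a rough memory estimate using the defaults shown under OPTIONS (chunk size 96Mi, upload concurrency 4) and an assumed "--transfers" value of 4 (a setting referenced above but not listed among this command's flags): 4 x 4 x 96 MiB is roughly 1.5 GiB of chunks buffered at peak. Lowering either value reduces that peak, for example (the remote name "myb2" is hypothetical):
> singularity storage update b2 --chunk-size 32Mi --upload-concurrency 2 myb2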
--disable-checksum
Disable checksums for large (> upload cutoff) files.
Normally rclone will calculate the SHA1 checksum of the input before
uploading it so it can add it to metadata on the object. This is great
for data integrity checking but can cause long delays for large files
to start uploading.
--download-url
Custom endpoint for downloads.
This is usually set to a Cloudflare CDN URL as Backblaze offers
free egress for data downloaded through the Cloudflare network.
Rclone works with private buckets by sending an "Authorization" header.
If the custom endpoint rewrites the requests for authentication,
e.g., in Cloudflare Workers, this header needs to be handled properly.
Leave blank if you want to use the endpoint provided by Backblaze.
The URL provided here SHOULD have the protocol and SHOULD NOT have
a trailing slash or specify the /file/bucket subpath as rclone will
request files with "{download_url}/file/{bucket_name}/{path}".
Example:
> https://mysubdomain.mydomain.tld
(No trailing "/", "file" or "bucket")
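Following the format above, the custom endpoint could be applied like this (the domain and the remote name "myb2" are placeholders):
> singularity storage update b2 --download-url https://mysubdomain.mydomain.tld myb2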
--download-auth-duration
Time before the public link authorization token will expire in s or suffix ms|s|m|h|d.
This is used in combination with "rclone link" for making files
accessible to the public and sets the duration before the download
authorization token will expire.
The minimum value is 1 second. The maximum value is one week.
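For example, to have download authorization tokens expire after one day instead of the default one week (the remote name "myb2" is hypothetical):
> singularity storage update b2 --download-auth-duration 1d myb2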
--memory-pool-flush-time
How often internal memory buffer pools will be flushed. (no longer used)
--memory-pool-use-mmap
Whether to use mmap buffers in internal memory pool. (no longer used)
--lifecycle
Set the number of days deleted files should be kept when creating a bucket.
On bucket creation, this parameter is used to create a lifecycle rule
for the entire bucket.
If lifecycle is 0 (the default) it does not create a lifecycle rule, so
the default B2 behaviour applies, which is to create versions of files
on delete and overwrite and to keep them indefinitely.
If lifecycle is >0 then it creates a single rule setting the number of
days before a file that is deleted or overwritten is deleted
permanently. This is known as daysFromHidingToDeleting in the b2 docs.
The minimum value for this parameter is 1 day.
You can also enable hard_delete in the config, which means deletions
won't create versions, but overwrites will still create them.
See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket creation.
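As a sketch, combining a 30-day lifecycle rule for newly created buckets with hard deletes (the remote name "myb2" is hypothetical and the values are illustrative):
> singularity storage update b2 --lifecycle 30 --hard-delete myb2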
--encoding
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
--description
Description of the remote.
OPTIONS:
--account value Account ID or Application Key ID. [$ACCOUNT]
--hard-delete Permanently delete files on remote removal, otherwise hide files. (default: false) [$HARD_DELETE]
--help, -h show help
--key value Application Key. [$KEY]
Advanced
--chunk-size value Upload chunk size. (default: "96Mi") [$CHUNK_SIZE]
--copy-cutoff value Cutoff for switching to multipart copy. (default: "4Gi") [$COPY_CUTOFF]
--description value Description of the remote. [$DESCRIPTION]
--disable-checksum Disable checksums for large (> upload cutoff) files. (default: false) [$DISABLE_CHECKSUM]
--download-auth-duration value Time before the public link authorization token will expire in s or suffix ms|s|m|h|d. (default: "1w") [$DOWNLOAD_AUTH_DURATION]
--download-url value Custom endpoint for downloads. [$DOWNLOAD_URL]
--encoding value The encoding for the backend. (default: "Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot") [$ENCODING]
--endpoint value Endpoint for the service. [$ENDPOINT]
--lifecycle value Set the number of days deleted files should be kept when creating a bucket. (default: 0) [$LIFECYCLE]
--memory-pool-flush-time value How often internal memory buffer pools will be flushed. (no longer used) (default: "1m0s") [$MEMORY_POOL_FLUSH_TIME]
--memory-pool-use-mmap Whether to use mmap buffers in internal memory pool. (no longer used) (default: false) [$MEMORY_POOL_USE_MMAP]
--test-mode value A flag string for X-Bz-Test-Mode header for debugging. [$TEST_MODE]
--upload-concurrency value Concurrency for multipart uploads. (default: 4) [$UPLOAD_CONCURRENCY]
--upload-cutoff value Cutoff for switching to chunked upload. (default: "200Mi") [$UPLOAD_CUTOFF]
--version-at value Show file versions as they were at the specified time. (default: "off") [$VERSION_AT]
--versions Include old versions in directory listings. (default: false) [$VERSIONS]
Client Config
--client-ca-cert value Path to CA certificate used to verify servers. To remove, use empty string.
--client-cert value Path to Client SSL certificate (PEM) for mutual TLS auth. To remove, use empty string.
--client-connect-timeout value HTTP Client Connect timeout (default: 1m0s)
--client-expect-continue-timeout value Timeout when using expect / 100-continue in HTTP (default: 1s)
--client-header value [ --client-header value ]  Set HTTP header for all transactions (i.e. key=value). This will replace the existing header values. To remove a header, use --client-header "key=". To remove all headers, use --client-header ""
--client-insecure-skip-verify Do not verify the server SSL certificate (insecure) (default: false)
--client-key value Path to Client SSL private key (PEM) for mutual TLS auth. To remove, use empty string.
--client-no-gzip Don't set Accept-Encoding: gzip (default: false)
--client-scan-concurrency value Max number of concurrent listing requests when scanning data source (default: 1)
--client-timeout value IO idle timeout (default: 5m0s)
--client-use-server-mod-time Use server modified time if possible (default: false)
--client-user-agent value Set the user-agent to a specified string. To remove, use empty string. (default: rclone default)
Retry Strategy
--client-low-level-retries value Maximum number of retries for low-level client errors (default: 10)
--client-retry-backoff value The constant delay backoff for retrying IO read errors (default: 1s)
--client-retry-backoff-exp value The exponential delay backoff for retrying IO read errors (default: 1.0)
--client-retry-delay value The initial delay before retrying IO read errors (default: 1s)
--client-retry-max value Max number of retries for IO read errors (default: 10)
--client-skip-inaccessible Skip inaccessible files when opening (default: false)
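As a sketch of the client options, a longer idle timeout and a custom user agent could be set alongside the backend flags (the remote name "myb2" is hypothetical and the values are illustrative):
> singularity storage update b2 --client-timeout 10m0s --client-user-agent my-custom-agent/1.0 myb2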