StorX Integration Series: StorX + Rclone
Introduction:
Restic is a highly secure and efficient backup client written in Go. Each Restic backup is a snapshot of a server, file set, or directory, deduplicated against what was stored before. Restoring a given backup returns the files or directories to their exact state at that time.
This is a quick-start tutorial that covers Restic usage with StorX.
This guide covers only some of the tool's basic features; for everything else, see Restic's complete command reference in its official documentation.
Installation
RCLONE
Rclone is needed to interact with StorX when using Restic.
Install Rclone from source.
If you have Go installed and $GOPATH/bin added to $PATH, you can install Rclone from source using the following command. The command works on all desktop and laptop operating systems.
go install github.com/rclone/rclone@latest
Once the bin folder of $GOPATH is added to $PATH, you can run Rclone directly from the terminal.
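A quick way to confirm the build is reachable from your shell (a sketch; it only checks, and prints a hint if rclone is not on the PATH yet):

```shell
# Check whether the freshly built rclone is on the PATH.
if command -v rclone >/dev/null 2>&1; then
  rclone version
else
  echo "rclone not on PATH yet; add \$GOPATH/bin (usually ~/go/bin) to PATH"
fi
```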
(The list below is current as of 4 July 2024.)
What can rclone do for you?
Rclone helps you:
· Backup (and encrypt) files to cloud storage
· Restore (and decrypt) files from cloud storage
· Mirror cloud data to other cloud services or locally
· Migrate data to the cloud or between cloud storage vendors
· Mount multiple, encrypted, cached, or diverse cloud storage as a disk
· Analyse and account for data held on cloud storage using lsf, lsjson, size, ncdu
· Union file systems together to present multiple local and cloud file systems as one
Script download and install
To install Rclone on Linux/macOS/BSD systems, run the following:
sudo -v ; curl https://rclone.org/install.sh | sudo bash
Download the precompiled binaries
1. Go to the download page and download the latest binary for Rclone.
2. Unzip the downloaded file.
3. Open a terminal in the Rclone folder.
4. You can now run Rclone from the command line.
5. Optionally, add the binary's location to your PATH to make it easier to run in the future.
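Step 5 can be done like this in a POSIX shell (a sketch; the unzip location is an assumed example path, and the change only lasts for the current session unless you add it to your shell profile):

```shell
# Add the unzipped rclone folder to PATH for the current session.
RCLONE_DIR="$HOME/Downloads/rclone-current-linux-amd64"   # example unzip path
export PATH="$PATH:$RCLONE_DIR"
rclone version || echo "rclone still not found; check RCLONE_DIR"
```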
Alternative Rclone installations:
For alternative installation instructions, see the documentation.
Backing up data
Get API key and secret:
1. From the top-left corner of the StorX dashboard, click the account button; a drop-down menu will appear.
2. Click Access.
3. Create S3 credentials.
4. After successful creation, click Download All, then Finish.
5. The downloaded file contains all the details you need to back up your files to StorX from your desktop.
JSON
Access Key: jucp2o2qXXXXXXXXXXXXXXXXXXXX
Secret Key: j3lolq5453ktsnnjfXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Endpoint: https://gateway.storx.io
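It can be convenient to keep these values in environment variables while following the rest of this guide (a sketch; the variable names are our own, and the values are the redacted placeholders from above):

```shell
# Placeholders copied from the downloaded credentials file; replace them.
export STORX_ACCESS_KEY='jucp2o2qXXXXXXXXXXXXXXXXXXXX'
export STORX_SECRET_KEY='j3lolq5453ktsnnjfXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
export STORX_ENDPOINT='https://gateway.storx.io'
echo "Using endpoint: $STORX_ENDPOINT"
```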
Configuring Rclone
You can edit the Rclone config file directly; find where it is stored by running rclone config file.
· Alternatively, run rclone config to set it up interactively. See the rclone config docs for more details.
To create a new remote via the terminal, follow the steps below:
rclone config
Current remotes:
Name Type
==== ====
DEFAULT
remote s3
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
Enter name for new remote.
name> storx
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your value.
1 / 1Fichier
\ (fichier)
2 / Akamai NetStorage
\ (netstorage)
3 / Alias for an existing remote
\ (alias)
4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
\ (s3)
5 / Backblaze B2
\ (b2)
6 / Better checksums for other remotes
\ (hasher)
7 / Box
\ (box)
8 / Cache a remote
\ (cache)
9 / Citrix Sharefile
\ (sharefile)
10 / Combine several remotes into one
\ (combine)
11 / Compress a remote
\ (compress)
12 / Dropbox
\ (dropbox)
13 / Encrypt/Decrypt a remote
\ (crypt)
14 / Enterprise File Fabric
\ (filefabric)
15 / FTP
\ (ftp)
16 / Google Cloud Storage (this is not Google Drive)
\ (google cloud storage)
17 / Google Drive
\ (drive)
18 / Google Photos
\ (google photos)
19 / HTTP
\ (http)
20 / Hadoop distributed file system
\ (hdfs)
21 / HiDrive
\ (hidrive)
22 / ImageKit.io
\ (imagekit)
23 / In memory object storage system.
\ (memory)
24 / Internet Archive
\ (internetarchive)
25 / Jottacloud
\ (jottacloud)
26 / Koofr, Digi Storage and other Koofr-compatible storage providers
\ (koofr)
27 / Linkbox
\ (linkbox)
28 / Local Disk
\ (local)
29 / Mail.ru Cloud
\ (mailru)
30 / Mega
\ (mega)
31 / Microsoft Azure Blob Storage
\ (azureblob)
32 / Microsoft Azure Files
\ (azurefiles)
33 / Microsoft OneDrive
\ (onedrive)
34 / OpenDrive
\ (opendrive)
35 / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)
\ (swift)
36 / Oracle Cloud Infrastructure Object Storage
\ (oracleobjectstorage)
37 / Pcloud
\ (pcloud)
38 / PikPak
\ (pikpak)
39 / Proton Drive
\ (protondrive)
40 / Put.io
\ (putio)
41 / QingCloud Object Storage
\ (qingstor)
42 / Quatrix by Maytech
\ (quatrix)
43 / SMB / CIFS
\ (smb)
44 / SSH/SFTP
\ (sftp)
45 / Sia Decentralized Cloud
\ (sia)
46 / Storj Decentralized Cloud Storage
\ (storj)
47 / Sugarsync
\ (sugarsync)
48 / Transparently chunk/split large files
\ (chunker)
49 / Uloz.to
\ (ulozto)
50 / Union merges the contents of several upstream fs
\ (union)
51 / Uptobox
\ (uptobox)
52 / WebDAV
\ (webdav)
53 / Yandex Disk
\ (yandex)
54 / Zoho
\ (zoho)
55 / premiumize.me
\ (premiumizeme)
56 / seafile
\ (seafile)
Storage> 4
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Amazon Web Services (AWS) S3
\ (AWS)
2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
\ (Alibaba)
3 / Arvan Cloud Object Storage (AOS)
\ (ArvanCloud)
4 / Ceph Object Storage
\ (Ceph)
5 / China Mobile Ecloud Elastic Object Storage (EOS)
\ (ChinaMobile)
6 / Cloudflare R2 Storage
\ (Cloudflare)
7 / DigitalOcean Spaces
\ (DigitalOcean)
8 / Dreamhost DreamObjects
\ (Dreamhost)
9 / Google Cloud Storage
\ (GCS)
10 / Huawei Object Storage Service
\ (HuaweiOBS)
11 / IBM COS S3
\ (IBMCOS)
12 / IDrive e2
\ (IDrive)
13 / IONOS Cloud
\ (IONOS)
14 / Seagate Lyve Cloud
\ (LyveCloud)
15 / Leviia Object Storage
\ (Leviia)
16 / Liara Object Storage
\ (Liara)
17 / Linode Object Storage
\ (Linode)
18 / Magalu Object Storage
\ (Magalu)
19 / Minio Object Storage
\ (Minio)
20 / Netease Object Storage (NOS)
\ (Netease)
21 / Petabox Object Storage
\ (Petabox)
22 / RackCorp Object Storage
\ (RackCorp)
23 / Rclone S3 Server
\ (Rclone)
24 / Scaleway Object Storage
\ (Scaleway)
25 / SeaweedFS S3
\ (SeaweedFS)
26 / StackPath Object Storage
\ (StackPath)
27 / Storj (S3 Compatible Gateway)
\ (Storj)
28 / Synology C2 Object Storage
\ (Synology)
29 / Tencent Cloud Object Storage (COS)
\ (TencentCOS)
30 / Wasabi Object Storage
\ (Wasabi)
31 / Qiniu Object Storage (Kodo)
\ (Qiniu)
32 / Any other S3 compatible provider
\ (Other)
provider> 1
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id>
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key>
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia, or Pacific Northwest.
| Leave location constraint empty.
\ (us-east-1)
/ US East (Ohio) Region.
2 | Needs location constraint us-east-2.
\ (us-east-2)
/ US West (Northern California) Region.
3 | Needs location constraint us-west-1.
\ (us-west-1)
/ US West (Oregon) Region.
4 | Needs location constraint us-west-2.
\ (us-west-2)
/ Canada (Central) Region.
5 | Needs location constraint ca-central-1.
\ (ca-central-1)
/ EU (Ireland) Region.
6 | Needs location constraint EU or eu-west-1.
\ (eu-west-1)
/ EU (London) Region.
7 | Needs location constraint eu-west-2.
\ (eu-west-2)
/ EU (Paris) Region.
8 | Needs location constraint eu-west-3.
\ (eu-west-3)
/ EU (Stockholm) Region.
9 | Needs location constraint eu-north-1.
\ (eu-north-1)
/ EU (Milan) Region.
10 | Needs location constraint eu-south-1.
\ (eu-south-1)
/ EU (Frankfurt) Region.
11 | Needs location constraint eu-central-1.
\ (eu-central-1)
/ Asia Pacific (Singapore) Region.
12 | Needs location constraint ap-southeast-1.
\ (ap-southeast-1)
/ Asia Pacific (Sydney) Region.
13 | Needs location constraint ap-southeast-2.
\ (ap-southeast-2)
/ Asia Pacific (Tokyo) Region.
14 | Needs location constraint ap-northeast-1.
\ (ap-northeast-1)
/ Asia Pacific (Seoul).
15 | Needs location constraint ap-northeast-2.
\ (ap-northeast-2)
/ Asia Pacific (Osaka-Local).
16 | Needs location constraint ap-northeast-3.
\ (ap-northeast-3)
/ Asia Pacific (Mumbai).
17 | Needs location constraint ap-south-1.
\ (ap-south-1)
/ Asia Pacific (Hong Kong) Region.
18 | Needs location constraint ap-east-1.
\ (ap-east-1)
/ South America (Sao Paulo) Region.
19 | Needs location constraint sa-east-1.
\ (sa-east-1)
/ Israel (Tel Aviv) Region.
20 | Needs location constraint il-central-1.
\ (il-central-1)
/ Middle East (Bahrain) Region.
21 | Needs location constraint me-south-1.
\ (me-south-1)
/ Africa (Cape Town) Region.
22 | Needs location constraint af-south-1.
\ (af-south-1)
/ China (Beijing) Region.
23 | Needs location constraint cn-north-1.
\ (cn-north-1)
/ China (Ningxia) Region.
24 | Needs location constraint cn-northwest-1.
\ (cn-northwest-1)
/ AWS GovCloud (US-East) Region.
25 | Needs location constraint us-gov-east-1.
\ (us-gov-east-1)
/ AWS GovCloud (US) Region.
26 | Needs location constraint us-gov-west-1.
\ (us-gov-west-1)
region> 1
Option endpoint.
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Enter a value. Press Enter to leave empty.
endpoint>
Option location_constraint.
Location constraint - must be set to match the Region.
Used when creating buckets only.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Empty for US Region, Northern Virginia, or Pacific Northwest
\ ()
2 / US East (Ohio) Region
\ (us-east-2)
3 / US West (Northern California) Region
\ (us-west-1)
4 / US West (Oregon) Region
\ (us-west-2)
5 / Canada (Central) Region
\ (ca-central-1)
6 / EU (Ireland) Region
\ (eu-west-1)
7 / EU (London) Region
\ (eu-west-2)
8 / EU (Paris) Region
\ (eu-west-3)
9 / EU (Stockholm) Region
\ (eu-north-1)
10 / EU (Milan) Region
\ (eu-south-1)
11 / EU Region
\ (EU)
12 / Asia Pacific (Singapore) Region
\ (ap-southeast-1)
13 / Asia Pacific (Sydney) Region
\ (ap-southeast-2)
14 / Asia Pacific (Tokyo) Region
\ (ap-northeast-1)
15 / Asia Pacific (Seoul) Region
\ (ap-northeast-2)
16 / Asia Pacific (Osaka-Local) Region
\ (ap-northeast-3)
17 / Asia Pacific (Mumbai) Region
\ (ap-south-1)
18 / Asia Pacific (Hong Kong) Region
\ (ap-east-1)
19 / South America (Sao Paulo) Region
\ (sa-east-1)
20 / Israel (Tel Aviv) Region
\ (il-central-1)
21 / Middle East (Bahrain) Region
\ (me-south-1)
22 / Africa (Cape Town) Region
\ (af-south-1)
23 / China (Beijing) Region
\ (cn-north-1)
24 / China (Ningxia) Region
\ (cn-northwest-1)
25 / AWS GovCloud (US-East) Region
\ (us-gov-east-1)
26 / AWS GovCloud (US) Region
\ (us-gov-west-1)
location_constraint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects, as S3 doesn't copy the ACL from the source but writes a fresh one.
If the acl is an empty string, then no X-Amz-Acl: header is added, and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
/ Owner gets FULL_CONTROL.
3 | The AllUsers group gets READ and WRITE access.
| Granting this on a bucket is generally not recommended.
\ (public-read-write)
/ Owner gets FULL_CONTROL.
4 | The AuthenticatedUsers group gets READ access.
\ (authenticated-read)
/ Object owner gets FULL_CONTROL.
5 | Bucket owner gets READ access.
| If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-read)
/ Both the object and bucket owners get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-full-control)
acl> 1
Option server_side_encryption.
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / None
\ ()
2 / AES256
\ (AES256)
3 / aws:kms
\ (aws:kms)
server_side_encryption> 1
Option sse_kms_key_id.
If using KMS ID, you must provide the ARN of the Key.
Choose a number from below, or type in your value.
Press Enter to leave empty.
1 / None
\ ()
2 / arn:aws:kms:*
\ (arn:aws:kms:us-east-1:*)
sse_kms_key_id>
Option storage_class.
The storage class to use when storing new objects in S3.
Choose a number from below, or type in your value.
Press Enter to leave empty.
1 / Default
\ ()
2 / Standard storage class
\ (STANDARD)
3 / Reduced redundancy storage class
\ (REDUCED_REDUNDANCY)
4 / Standard Infrequent Access storage class
\ (STANDARD_IA)
5 / One Zone Infrequent Access storage class
\ (ONEZONE_IA)
6 / Glacier storage class
\ (GLACIER)
7 / Glacier Deep Archive storage class
\ (DEEP_ARCHIVE)
8 / Intelligent-Tiering storage class
\ (INTELLIGENT_TIERING)
9 / Glacier Instant Retrieval storage class
\ (GLACIER_IR)
storage_class> 1
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: s3
- provider: AWS
- region: us-east-1
- acl: private
Keep this "storx" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>
· Once the remote has been created, you can exit the config tool (choose q to quit, or press Ctrl+C).
· Run rclone -h in the terminal.
(Or)
· Run the command below to find the config file, then update it:
rclone config file
In rclone.conf, set access_key_id and secret_access_key to the S3-compatible credentials created above.
To edit the config file, open it with your preferred text editor or from your terminal:
nano C:\Users\user\AppData\Roaming\rclone\rclone.conf # replace with your path to the config file
Use the following config:
[storx]
type = s3
provider = Storx
access_key_id = jucp2o2qygzqois2ro5nowi4nvea # REPLACE ME
secret_access_key = j3lolq5453ktsnnjf73ci5px6y23vaos7d43hrz7ucxw6j47mtfj4 # REPLACE ME
endpoint = gateway.storx.io
chunk_size = 64Mi
disable_checksum = true
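As a non-interactive alternative, the same section can be written to a dedicated config file and selected via Rclone's RCLONE_CONFIG environment variable (a sketch; the file path is an example and the keys are placeholders you must replace):

```shell
# Write the storx remote to its own config file and point rclone at it.
export RCLONE_CONFIG="$PWD/rclone-storx.conf"   # example location
cat > "$RCLONE_CONFIG" <<'EOF'
[storx]
type = s3
provider = Storx
access_key_id = REPLACE_ME
secret_access_key = REPLACE_ME
endpoint = gateway.storx.io
chunk_size = 64Mi
disable_checksum = true
EOF
grep '^endpoint' "$RCLONE_CONFIG"
```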
Remove a bucket
rclone rmdir storx:restic-test-bucket/ (note the trailing forward slash)
If the slash is removed from the path, you will get an error like: caused by: Put "https://test2.gateway.storx.io/": dial tcp: lookup test2.gateway.storx.io on 127.0.0.00:00: no such host
List buckets using the command:
rclone lsf storx:
Now, to create a bucket:
rclone mkdir storx:restic-test-bucket/
List the files inside the bucket:
rclone ls storx:restic-test-bucket/
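The commands in this section can be combined into one round-trip script (a sketch; it runs the rclone calls only when rclone is on the PATH, and the bucket name is the example used above):

```shell
# Round-trip sketch: create the example bucket, list it, and remove it again.
run_demo() {
  rclone mkdir storx:restic-test-bucket/   # create the bucket
  rclone lsf storx:                        # list buckets on the remote
  rclone ls storx:restic-test-bucket/      # list files inside the bucket
  rclone rmdir storx:restic-test-bucket/   # remove it (note the trailing slash)
}
main() {
  if command -v rclone >/dev/null 2>&1; then
    run_demo || echo "rclone reported an error (is the storx remote configured?)"
  else
    echo "rclone not found on PATH"
  fi
}
main
```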
The bucket should now be listed in the StorX dashboard; you can see the changes in the web application once you log in to your account.
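With the storx remote configured, Restic (from the introduction) can store its repository on StorX through Restic's rclone backend. A sketch, assuming restic is installed and the example bucket above exists; the repository password is a placeholder you must choose yourself:

```shell
# Restic over the rclone backend; the repository lives in the example bucket.
export RESTIC_PASSWORD='REPLACE-WITH-A-STRONG-PASSWORD'   # placeholder
repo="rclone:storx:restic-test-bucket"
if command -v restic >/dev/null 2>&1; then
  restic -r "$repo" init \
    && restic -r "$repo" backup ~/Documents \
    && restic -r "$repo" snapshots \
    || echo "restic reported an error (is the storx remote configured?)"
else
  echo "restic not found on PATH; install it first"
fi
```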
About StorX:
StorX is a decentralized cloud storage network that empowers users to store their data securely in the cloud. Each file uploaded on StorX is split and encrypted into multiple fragments to autonomous storage nodes operated by individual operators worldwide. Designed as a collection of independent storage networks, no particular operator has complete access to your data. StorX is faster than legacy centralized storage providers and allows users to save substantial costs compared to a centralized cloud. StorX enables users with spare storage capacity to lease space and earn great returns in SRX tokens.
$SRX is listed on multiple-tier exchanges like HitBTC, Liquid, LCX, Coinstore, Bitmart and Bitrue. To know more about StorX Network, Visit https://storx.tech
Don't forget to follow us on our social channels.