Introduction

In this article, we guide you through setting up an automatic backup script that uploads important node data to an Amazon S3 bucket.

Step 1/3: Creating an Amazon S3 bucket

The first thing to do is to create an Amazon S3 bucket. Before you can upload data to Amazon S3, you must create a bucket in one of the AWS regions to store your data. After you create a bucket, you can upload an unlimited number of data objects to it. To set up the auto backup and upload script, you also need to create a new access key on the AWS account that owns the bucket.
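If you prefer the command line to the AWS console, the bucket and access key can also be created with the AWS CLI. This is a minimal sketch, assuming the CLI is installed and configured with sufficient permissions; my-node-backups, eu-west-1, and backup-user are placeholder names you should replace with your own:

# Create a new S3 bucket (bucket names must be globally unique)
aws s3 mb s3://my-node-backups --region eu-west-1

# Create an access key for an existing IAM user; the output contains
# the AccessKeyId and SecretAccessKey needed in Step 2
aws iam create-access-key --user-name backup-user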

Step 2/3: Creating a bash script

Create a bash script with nano <scriptname> (in this example, we'll use aws-upload.sh as the script name, but feel free to use whatever you'd like) and paste the following command, which creates a backup and uploads the files to your Amazon S3 bucket:

docker exec otnode node scripts/backup-upload-aws.js --config=/ot-node/.origintrail_noderc --configDir=/ot-node/data --backupDirectory=/ot-node/backup --AWSAccessKeyId=YOUR_AWS_ACCESS_KEY_ID --AWSSecretAccessKey=YOUR_AWS_SECRET_ACCESS_KEY --AWSBucketName=YOUR_AWS_BUCKET_NAME

⚠️ Be sure to replace YOUR_AWS_ACCESS_KEY_ID, YOUR_AWS_SECRET_ACCESS_KEY, and YOUR_AWS_BUCKET_NAME with your own values.
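For reference, the complete aws-upload.sh can be as simple as the command above wrapped in a script (a minimal sketch; the backslashes just split the single command across lines for readability):

#!/bin/bash
# Create a backup of the node and upload it to your Amazon S3 bucket
docker exec otnode node scripts/backup-upload-aws.js \
    --config=/ot-node/.origintrail_noderc \
    --configDir=/ot-node/data \
    --backupDirectory=/ot-node/backup \
    --AWSAccessKeyId=YOUR_AWS_ACCESS_KEY_ID \
    --AWSSecretAccessKey=YOUR_AWS_SECRET_ACCESS_KEY \
    --AWSBucketName=YOUR_AWS_BUCKET_NAME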

Once you have your script, you need to make it executable:

chmod +x aws-upload.sh

Now you can run it with:

./aws-upload.sh

If the script runs successfully, a new backup file will appear in your S3 bucket.

Step 3/3: Scheduling the script as a cron job

Create a cron job by running

crontab -e

And adding something like the following to the end of the file:

0 0 * * 0 ~/aws-upload.sh > ~/aws-upload-log.txt 2>&1

The above entry runs the script weekly, at midnight every Sunday. If you wish to create and upload backups on a different schedule, we suggest using crontab.guru to work out the format for your desired schedule.
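For example, here are a few alternative schedules (each line is a complete crontab entry, reusing the same script and log file as above):

# Every day at 02:00
0 2 * * * ~/aws-upload.sh > ~/aws-upload-log.txt 2>&1

# Every Monday at 03:30
30 3 * * 1 ~/aws-upload.sh > ~/aws-upload-log.txt 2>&1

# At midnight on the first day of every month
0 0 1 * * ~/aws-upload.sh > ~/aws-upload-log.txt 2>&1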

That's it! You've successfully scheduled a job that runs the backup and upload script weekly.
