
Backing up a Jamf Pro database hosted in Amazon Web Services’ RDS service to an S3 bucket

For those using Amazon Web Services to host Jamf Pro, one of the issues you may run into is how to get backups of your Jamf Pro database that you can access. AWS’s RDS service makes automatic backups of your database to S3, but you don’t get direct access to the S3 bucket where they’re stored.

In the event that you want an accessible backup of your RDS-hosted MySQL database, Amazon provides the option of exporting a database snapshot to an S3 bucket in your AWS account. This process will export your data in Apache Parquet format instead of as a MySQL database export file.

However, it’s also possible to create and use an EC2 instance to perform the following tasks:

  1. Connect to your RDS-hosted MySQL database.
  2. Create a backup of your MySQL database using the mysqldump tool.
  3. Store the backup in an S3 bucket of your choosing.

For more details, please see below the jump.

Setting up the backup server

In order to run the backups, you’ll need to set up several resources in AWS: an S3 bucket, an IAM role, an EC2 instance and a VPC Security Group.

Please use the procedure below to create the necessary resources:

1. Create an S3 bucket to store your MySQL backups in.

2. Set up an IAM role which allows an EC2 instance to have read/write access to the S3 bucket where you’ll be storing the backups.

3. Create an EC2 instance running Linux.

Note: This instance will need to have enough free space to store a complete backup of your database, so I recommend checking the size of your database and choosing an appropriate amount of disk space when you’re setting up the new instance.

4. Install the following tools on your Linux EC2 instance:

  • The MySQL client tools (these provide the mysqldump and mysql_config_editor commands used below)
  • The AWS command line tools (these provide the aws command used below)

5. Attach the IAM role to your EC2 instance.

6. Create a VPC Security Group which allows your EC2 instance and RDS-hosted database to successfully communicate with each other.

Note: If you’re running Jamf Pro in AWS and you’re hosting your database in RDS, you likely have a security group like this set up already. Otherwise, your Jamf Pro server wouldn’t be able to communicate with the database.

7. Add the EC2 instance to the VPC Security Group which allows access to your RDS database.
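For step 2 above, the IAM role’s policy needs to grant the instance read and write access to the backup bucket. Below is a minimal sketch of such a policy, assuming the example bucket name used later in this post (jamfpro-database-backup); the bucket name, role name and policy name are all placeholders you’d substitute with your own:

```shell
# Write a hypothetical minimal S3 access policy to a local file.
# "jamfpro-database-backup" is an example bucket name - change it to yours.
cat > backup-bucket-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::jamfpro-database-backup",
        "arn:aws:s3:::jamfpro-database-backup/*"
      ]
    }
  ]
}
EOF
```

The policy can then be attached to the role in the IAM console, or with a command along the lines of aws iam put-role-policy --role-name backup-role --policy-name backup-bucket-access --policy-document file://backup-bucket-policy.json.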

Once all of the preparation work has been completed, use the following procedure to set up the backup process:

Note: For the purposes of this post, I’m using Red Hat Enterprise Linux (RHEL) as the Linux distro. If using another Linux distro, be aware that you may need to make adjustments for application binaries being stored in different locations than they are on RHEL.

Setting up MySQL authentication

1. Log into your EC2 instance.

2. Run the following command to change to a shell which has root privileges.

sudo -s

3. Create a MySQL connection named local using a command similar to the one below:

mysql_config_editor set --login-path=local --host=rds.database.server.url.goes.here --user=username --password

You’ll then be prompted for the password to the Jamf Pro database.

For example, if your Jamf Pro database has the following RDS URL and username:

  • URL: jamfprodb.dcjkudz4hlph.eu-west-1.rds.amazonaws.com
  • Username: jamfsw03

The following command would be used to create the MySQL connection:

mysql_config_editor set --login-path=local --host=jamfprodb.dcjkudz4hlph.eu-west-1.rds.amazonaws.com --user=jamfsw03 --password

Running this command should create a file named .mylogin.cnf in root’s home directory. To see the contents of the MySQL connection file and verify that it’s set up correctly, run the following command:

mysql_config_editor print --login-path=local

That should produce output which looks similar to what’s shown below:

user = jamfsw03
password = *****
host = jamfprodb.dcjkudz4hlph.eu-west-1.rds.amazonaws.com

Note: The reason for creating the MySQL connection is so we don’t need to store the database password as plaintext in the script.

Creating the backup script

1. Once the MySQL connection has been created, copy the script below and store it as /usr/local/bin/aws_mysql_database_backup.sh.

This script has several variables that will need to be edited. For example, if your Jamf Pro database is named jamfprodb, the S3 bucket you created is named jamfpro-database-backup and the MySQL connection you set up is named local, the following variables would look like this:

# Enter name of the RDS database being backed up
database_name="jamfprodb"

# Enter name of the S3 bucket
S3_bucket="jamfpro-database-backup"

# Enter the MySQL connection name
mysql_connection_name="local"

#!/bin/bash

# Enter name of the RDS database being backed up
database_name="jamfprodb"

# Enter name of the S3 bucket
S3_bucket="jamfpro-database-backup"

# Enter the MySQL connection name
mysql_connection_name="local"

# These variables don't need to be edited
log_name="jamfprobackup-database-backup-$(date +'%Y%m%d%H%M%S').log"
log_location="/tmp/$log_name"
database_mysqldump="jamfprobackup-database-backup-$(date +'%Y%m%d%H%M%S').sql.gz"

# Get applicable AWS region from the EC2 instance that the script is running on,
# by trimming the trailing availability zone letter from the instance metadata.
aws_region=$(/bin/curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed "s/.$//g")

ScriptLogging(){
    DATE=`date +%Y-%m-%d\ %H:%M:%S`
    /usr/bin/echo "$DATE" " $1" >> "$log_location"
}

# Creates a database backup using the mysqldump tool and stores the backup in the /tmp directory
ScriptLogging "Creating backup of database to $database_mysqldump"
/usr/bin/mysqldump --login-path="$mysql_connection_name" --max-allowed-packet=1024M --single-transaction --routines --triggers --databases "$database_name" | /usr/bin/gzip -9 > /tmp/"$database_mysqldump"

# The "backupstatus" variable checks the mysqldump command's exit status
backupstatus=`echo ${PIPESTATUS[0]}`

# If the mysqldump command completed successfully and if the database backup exists,
# the script continues. Otherwise, the script exits with an error.
if [[ "$backupstatus" -eq 0 ]] && [[ -f /tmp/"$database_mysqldump" ]]; then
    ScriptLogging "Backup created successfully."
else
    ScriptLogging "Backup not successfully created. Removing any files created and exiting with error."

    # Upload backup failure log to S3 bucket.
    /usr/bin/aws s3 cp "$log_location" s3://"$S3_bucket"/"$log_name" --region "$aws_region"

    if [[ -f /tmp/"$database_mysqldump" ]]; then
        /usr/bin/rm /tmp/"$database_mysqldump"
    fi
    exit 1
fi

# Copies database backup to the S3 bucket.
ScriptLogging "Uploading database backup to the following S3 bucket: $S3_bucket"
/usr/bin/aws s3 cp /tmp/"$database_mysqldump" s3://"$S3_bucket"/"$database_mysqldump" --region "$aws_region"

# Removes the database backup from the /tmp directory.
ScriptLogging "Removing the backup file $database_mysqldump from /tmp"
/usr/bin/rm /tmp/"$database_mysqldump"

ScriptLogging "Backup process completed."

# Uploads backup log to S3 bucket.
ScriptLogging "Uploading database backup log to the following S3 bucket: $S3_bucket"
/usr/bin/aws s3 cp "$log_location" s3://"$S3_bucket"/"$log_name" --region "$aws_region"
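The aws_region value in the script comes from the instance’s availability zone, with the trailing zone letter trimmed off by sed. A standalone illustration of that string handling, using an example zone value:

```shell
# Example: availability zone "eu-west-1a" becomes region "eu-west-1".
# The sed expression deletes the final character of the string.
az="eu-west-1a"
region=$(echo "$az" | sed "s/.$//g")
echo "$region"
# → eu-west-1
```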

2. Make the script executable by running the following command with root privileges:

chmod 755 /usr/local/bin/aws_mysql_database_backup.sh

3. Ensure that root owns the file by running the following command with root privileges:

chown root:root /usr/local/bin/aws_mysql_database_backup.sh

Note: The mysqldump command used in the script is set up with the following options:

  • --max-allowed-packet=1024M
  • --single-transaction
  • --routines
  • --triggers

--max-allowed-packet=1024M: This specifies a max_allowed_packet value of 1 GB for mysqldump. This allows the packet buffer limit for mysqldump to grow beyond its default 4 MB limit to the 1 GB limit specified by the max_allowed_packet value.

--single-transaction: Generates a checkpoint that allows the dump to capture all data prior to the checkpoint while receiving incoming changes. Those incoming changes do not become part of the dump. That ensures the same point-in-time for all tables.

--routines: Dumps all stored procedures and stored functions.

--triggers: Dumps all triggers for each table that has them.

These options are designed for use with InnoDB tables and provide an exact point-in-time snapshot of the data in the database. They also do not require the MySQL tables to be locked, which in turn allows the Jamf Pro database to continue to work normally while the backup is taking place.
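The script’s exit-status check relies on bash’s PIPESTATUS array rather than $?, because in a pipeline $? only reflects the last command (gzip), which can succeed even when mysqldump fails. A quick demonstration, with false standing in for a failing mysqldump:

```shell
#!/bin/bash
# "false" stands in for a failing mysqldump; gzip still exits 0,
# so $? alone would hide the failure. PIPESTATUS[0] exposes it.
false | gzip -9 > /dev/null
echo "last-command status: $?, mysqldump-side status: ${PIPESTATUS[0]}"
# → last-command status: 0, mysqldump-side status: 1
```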

Scheduling the database backup

You can set up a nightly database backup using cron. For example, if you wanted to set up a database backup to run daily at 11:30 PM, you can use the procedure below to set that up.

1. Export existing crontab by running the following command with root privileges:

crontab -l > /tmp/crontab_export

2. Export new crontab entry to exported crontab file by running the following command with root privileges:

echo "30 23 * * * /usr/local/bin/aws_mysql_database_backup.sh 2>&1" >> /tmp/crontab_export

3. Install new cron file using exported crontab file by running the following command with root privileges:

crontab /tmp/crontab_export


Once everything is set up and ready to go, you should see your database backups and associated logs begin to appear in your S3 bucket.

