Automate MySQL Database Backup to Amazon AWS S3 (using PHP)
Databases.
A database is an organized collection of data. It is the collection of schemas, tables, queries, reports, views and other objects. The data is typically organized to model aspects of reality in a way that supports processes requiring information, such as modelling the availability of rooms in hotels in a way that supports finding a hotel with vacancies. – Wikipedia
Databases are generally updated frequently. This dynamic nature makes it inefficient to take manual backups regularly. Sure, you can create a mysqldump on the server itself with a simple script, but what if the server crashes?
To deal with this, you can create a backup of the database and upload it to a remote (and reliable) server regularly – using a script. We'll use AWS S3 for this tutorial, as it's reliable and easy to set up. Let's start with the backup function.
function LetsTakeBackup() {
    // prefix the filename with the current timestamp so old backups can be identified later
    $filename = time() . '_' . 'db-backup.sql.bz2';

    // note the trailing slash – the filename is appended directly to this path
    $backup_directory = 'path/to/backup/directory/';

    // dump the database and compress it with bzip2
    // (mysqldump expects no space between -p and the password)
    $cmd = 'mysqldump -h ' . DB_HOSTNAME . ' -u ' . DB_USERNAME . ' -p' . DB_PASSWORD . ' ' . DATABASE_NAME . ' | bzip2 > ' . $backup_directory . $filename;

    exec( $cmd );
}
There. The database backup file is generated and saved in the specified directory on the server itself. Note that the filename is prefixed with the current timestamp using PHP's time() function; we'll use this timestamp later to delete backup files older than a certain period. Now we just need to move the file to AWS S3.
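For example, if the limit is a strtotime()-style offset such as '-1 month', the prefix can later be compared against a cutoff timestamp. A minimal sketch of that comparison (the filename and limit below are just examples):

// Sketch: check whether a timestamped backup file is older than the limit.
// Assumes filenames of the form "<unix timestamp>_db-backup.sql.bz2".
$file   = '1672531200_db-backup.sql.bz2';
$dbtime = (int) substr( $file, 0, strripos( $file, '_' ) ); // extract the timestamp prefix
$cutoff = strtotime( '-1 month', time() );                  // anything older than one month

if ( $dbtime < $cutoff ) {
    // this backup is older than the limit and can be deleted
}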
Get AWS SDK
Download the AWS SDK for PHP from Amazon and include it in your project: http://docs.aws.amazon.com/aws-sdk-php/v3/guide/getting-started/installation.html
Note that the class below uses the Aws\Common\Aws factory, which is the version 2 interface of the SDK, so make sure the SDK version you install matches the code you run.
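If you use Composer (the method recommended in the installation guide linked above), pulling in the SDK is a one-liner and the include changes slightly; a minimal sketch, assuming a standard Composer setup:

// Assuming the SDK was installed with: composer require aws/aws-sdk-php
// Composer's autoloader replaces the aws-autoloader.php include used below.
require_once __DIR__ . '/vendor/autoload.php';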
The PHP Class
Here’s a small piece of code, and that’s all you need to get the script up and running. Make modifications as per your requirements.
require_once( '/path/to/aws-autoloader.php' );

use Aws\Common\Aws;
use Aws\S3\Exception\S3Exception;

// AWS key
define( 'AWS_KEY', 'YOUR AWS ACCOUNT KEY' );

// AWS secret
define( 'AWS_SECRET', 'YOUR AWS ACCOUNT SECRET KEY' );

// AWS bucket to use (bucket is created if it doesn't exist)
define( 'BUCKET', 'BUCKET NAME' );

// backup limit, eg. -1 day, -2 weeks, -1 month
define( 'BACKUP_LIMIT', 'BACKUP TIME LIMIT' );

// file size/upload limit in bytes
define( 'FILE_SIZE_LIMIT', 'MAX FILE SIZE IN BYTES' );

class UploadToAWS {

    function movefile( $fileName, $filePath ) {
        // directory path of backup files
        define( 'DB_DIRECTORY', $filePath . '/' );

        // source file to upload
        define( 'SOURCE_FILE', $fileName );

        $this->check_file_exists( SOURCE_FILE );

        $aws = Aws::factory( array(
            'credentials' => array(
                'key'    => AWS_KEY,
                'secret' => AWS_SECRET
            )
        ) );

        $s3 = $aws->get( 'S3' );

        $this->check_bucket_exists( $s3 );
        $this->upload_file( $s3, SOURCE_FILE );

        // remove backups on S3 that are older than BACKUP_LIMIT
        $files = $this->get_files_in_bucket( $s3 );
        if ( !empty( $files ) ) {
            $this->delete_files( $s3, $files );
        }
    }

    // check if db backup file exists
    private function check_file_exists( $file = null ) {
        if ( !file_exists( DB_DIRECTORY . @$file ) || empty( $file ) ) {
            die( 'ERROR: File not found' );
        }

        if ( is_dir( $file ) == true ) {
            die( 'ERROR: Please select a valid file.' );
        }

        $filesize = filesize( DB_DIRECTORY . $file );
        if ( @$filesize > FILE_SIZE_LIMIT ) {
            die( 'ERROR: File too large.' );
        }
    }

    // check if bucket exists, if not then create one
    private function check_bucket_exists( &$s3 ) {
        $bucketexists = false;

        $buckets = $s3->listBuckets();
        foreach ( $buckets['Buckets'] as $bucket ) {
            if ( $bucket['Name'] == BUCKET ) {
                $bucketexists = true;
            }
        }

        if ( $bucketexists == false ) {
            $s3->createBucket( array( 'Bucket' => BUCKET ) );
        }

        $s3->waitUntil( 'BucketExists', array( 'Bucket' => BUCKET ) );

        if ( $bucketexists == false ) {
            // bucket created action
        }
    }

    // upload file in parts
    private function upload_file( &$s3, $source_file ) {
        $files = $this->get_files_in_bucket( $s3 );

        if ( !empty( $files ) ) {
            foreach ( $files as $file ) {
                if ( $file == $source_file ) {
                    // file with same name already exists
                    return false;
                }
            }
        }

        $filename = $source_file;
        $filesize = filesize( DB_DIRECTORY . $source_file ) . ' Bytes';

        // Create a new multipart upload and get the upload ID.
        $fileupload = $s3->createMultipartUpload( array(
            'Bucket'       => BUCKET,
            'Key'          => $filename,
            'StorageClass' => 'REDUCED_REDUNDANCY',
        ) );
        $uploadId = $fileupload['UploadId'];

        // Upload the file in 5 MB parts.
        try {
            $file       = fopen( DB_DIRECTORY . $source_file, 'r' );
            $parts      = array();
            $partNumber = 1;

            while ( !feof( $file ) ) {
                $fileupload = $s3->uploadPart( array(
                    'Bucket'     => BUCKET,
                    'Key'        => $filename,
                    'UploadId'   => $uploadId,
                    'PartNumber' => $partNumber,
                    'Body'       => fread( $file, 5 * 1024 * 1024 )
                ) );

                $parts[] = array(
                    'PartNumber' => $partNumber++,
                    'ETag'       => $fileupload['ETag']
                );
            }

            fclose( $file );
        } catch ( S3Exception $e ) {
            // abort and clean up the parts uploaded so far
            $fileupload = $s3->abortMultipartUpload( array(
                'Bucket'   => BUCKET,
                'Key'      => $filename,
                'UploadId' => $uploadId
            ) );
        }

        // Complete multipart upload.
        $fileupload = $s3->completeMultipartUpload( array(
            'Bucket'   => BUCKET,
            'Key'      => $filename,
            'UploadId' => $uploadId,
            'Parts'    => $parts
        ) );
    }

    // list the keys of all objects currently in the bucket
    private function get_files_in_bucket( &$s3 ) {
        $files = array();

        $iterator = $s3->getIterator( 'ListObjects', array( 'Bucket' => BUCKET ) );
        foreach ( $iterator as $object ) {
            $files[] = $object['Key'];
        }

        return $files;
    }

    // delete backups older than BACKUP_LIMIT, always keeping at least two files
    private function delete_files( &$s3, $files ) {
        foreach ( $files as $key => $file ) {
            // timestamp prefix that was added when the backup was created
            $dbtime = substr( $file, 0, strripos( $file, '_' ) );

            if ( $dbtime < strtotime( BACKUP_LIMIT, time() ) ) {
                unset( $files[$key] );

                if ( sizeof( $files ) < 2 ) {
                    // number of backup files is less than 2, deletion stopped.
                    return false;
                }

                $deletefile = $s3->deleteObject( array(
                    'Bucket' => BUCKET,
                    'Key'    => $file
                ) );
            }
        }
    }
}
This class is used to:
- Check that the file you're trying to upload is valid and not too large.
- Check whether the AWS bucket exists and create it if it doesn't. Buckets on AWS S3 are like directories.
- Get the list of all files in the bucket and use the timestamp prefix to delete files older than the specified limit. This is important to keep the total size of the backups on S3 under control.
- And of course, upload the file to AWS S3 using a multipart upload. (S3 requires every part except the last to be at least 5 MB, which is why the code reads the file in 5 MB chunks.)
You can also create a function to delete the local backup file after the upload has completed successfully, as sketched below.
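A minimal sketch of such a cleanup step, assuming the same backup directory and timestamped filenames used above:

// Sketch: remove the local copy once the file is safely on S3.
// $backup_directory and $filename are assumed to match the values used in LetsTakeBackup().
function DeleteLocalBackup( $backup_directory, $filename ) {
    $path = rtrim( $backup_directory, '/' ) . '/' . $filename;

    if ( file_exists( $path ) && !is_dir( $path ) ) {
        unlink( $path );
    }
}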
Instantiate UploadToAWS
$AWS = new UploadToAWS();
$AWS->movefile( $fileName, $filePath );
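The snippets above don't show how the backup function and the uploader are wired together; one simple approach – assuming LetsTakeBackup() is modified to return the generated filename – looks like this:

// Hypothetical glue code: assumes LetsTakeBackup() ends with "return $filename;"
// and that $filePath matches the $backup_directory used inside that function.
$fileName = LetsTakeBackup();               // e.g. "1672531200_db-backup.sql.bz2"
$filePath = 'path/to/backup/directory';     // movefile() appends the trailing slash itself

$AWS = new UploadToAWS();
$AWS->movefile( $fileName, $filePath );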