tekkenking / dbbackupman
Cross-DB backups (PostgreSQL/MySQL/MariaDB) with uploads, retention, and incremental options.
pkg:composer/tekkenking/dbbackupman
Requires
- php: ^8.1|^8.2|^8.3
- illuminate/console: ^10.0|^11.0|^12.0
- illuminate/filesystem: ^10.0|^11.0|^12.0
- illuminate/support: ^10.0|^11.0|^12.0
- nesbot/carbon: ^2.0
- symfony/process: ^6.4|^7.0
Requires (Dev)
- mockery/mockery: ^1.6
- orchestra/testbench: ^8.0|^9.0|^10.0
- phpunit/phpunit: ^10.5|^11.0
README
Cross-DB backups for Laravel with uploads, incremental modes, retention, and per-disk remote paths.
- Databases: PostgreSQL, MySQL, MariaDB
- Modes: full, schema, incremental
- Uploads: Multiple filesystem disks; per-disk remote paths (empty path → bucket root)
- Retention: Keep N latest sets and/or delete sets older than D days
- Laravel: 10 · 11 · 12 (PHP 8.1+)
composer package: tekkenking/dbbackupman
Namespace: Tekkenking\Dbbackupman
Table of Contents
- Requirements
- Install
- Configuration
- Command Usage
- Examples
- Uploads: Per-Disk Remote Paths
- Incremental Backups & State
- Restoring Backups
- Scheduling
- Using the Facade
- Testing the Package
- Troubleshooting
- Version Compatibility
- Security Notes
- Contributing
- License
Requirements
- PHP 8.1+ (8.2/8.3 recommended)
- Laravel 10 / 11 / 12
- DB client tools available on the host that runs the command:
  - PostgreSQL: pg_dump, psql (and pg_dumpall if using --globals)
  - MySQL/MariaDB: mysqldump, mysqlbinlog (for incremental)
- Properly configured Laravel database connection(s) and filesystem disk(s)
The command preflights required tools and will error early if a binary is missing.
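If you want to confirm the tools yourself (for example on a fresh host or in a container image), a quick shell check along these lines works; the binary names match the defaults in config/dbbackup.php, so adjust if you have overridden them via env:

# Report any missing client binaries before the first backup run
for bin in pg_dump pg_dumpall psql mysqldump mysqlbinlog; do
  command -v "$bin" >/dev/null 2>&1 || echo "missing: $bin"
done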
Install
composer require tekkenking/dbbackupman
php artisan vendor:publish --provider="Tekkenking\Dbbackupman\DbBackupServiceProvider" --tag=config
This publishes config/dbbackup.php.
Configuration
config/dbbackup.php:
<?php

return [
    'tools' => [
        'pg_dump'     => env('DBBACKUP_PG_DUMP', 'pg_dump'),
        'pg_dumpall'  => env('DBBACKUP_PG_DUMPALL', 'pg_dumpall'),
        'psql'        => env('DBBACKUP_PSQL', 'psql'),
        'mysqldump'   => env('DBBACKUP_MYSQLDUMP', 'mysqldump'),
        'mysqlbinlog' => env('DBBACKUP_MYSQLBINLOG', 'mysqlbinlog'),
    ],

    'upload' => [
        'disks'       => [],   // e.g. ['s3','wasabi']
        'remote_path' => '',   // fallback base path when disk not in remote_map
        'remote_map'  => [     // per-disk base path; "" means bucket root
            // 's3'     => 'backups/prod',
            // 'wasabi' => '',
        ],
    ],

    'retention' => [
        'keep' => null,  // keep last N sets
        'days' => null,  // delete sets older than D days
    ],
];
Define your filesystem disks in config/filesystems.php (e.g., S3/Wasabi/GCS).
Command Usage
Primary entry:
php artisan db:backup [options]
Core options
- --connection=  Laravel DB connection name (defaults to database.default)
- --driver=      Force pgsql or mysql/mariadb (usually inferred from the connection)
- --mode=        full | schema | incremental
- --gzip         Compress outputs (.gz)
- --out=         Local output dir (default: storage/app/db-backups)
Uploads
- --disks=        CSV list of filesystem disks (e.g. s3,wasabi). If omitted, uses config('dbbackup.upload.disks').
- --remote-path=  Fallback base path for disks not in the map (can be empty to upload at bucket root)
- --remote-map=   Per-disk base path; JSON or CSV:
  - JSON: --remote-map='{"s3":"backups/prod","wasabi":""}'
  - CSV:  --remote-map=s3:backups/prod,wasabi:
Retention
- --retention-keep=  Keep latest N backup sets
- --retention-days=  Delete sets older than D days
Applied to local and each remote disk using manifest timestamps.
PostgreSQL-only
- --per-schema  Dump each non-system schema individually
- --include=    CSV schemas to include (with --per-schema)
- --exclude=    CSV schemas to exclude (with --per-schema)
- --globals     Include roles & tablespaces via pg_dumpall -g
- --no-owner    Omit ownership
- Incremental:
  - --since=           ISO8601 start time (fallback when no state found)
  - --pg-csv-include=  CSV table patterns (schema.table, * allowed)
  - --pg-csv-exclude=  CSV table patterns
MySQL-only
- Incremental:
  - --incremental-type=   binlog | updated_at  Type of incremental (default: binlog)
  - --mysql-csv-include=  CSV of tables to include (for updated_at type)
  - --mysql-csv-exclude=  CSV of tables to exclude (for updated_at type)
Incremental options (PostgreSQL and MySQL updated_at type)
- --from-date=           Start date for incremental exports (YYYY-MM-DD or YYYY-MM-DD HH:MM:SS)
- --to-date=             End date for incremental exports (YYYY-MM-DD or YYYY-MM-DD HH:MM:SS, defaults to now)
- --incremental-format=  csv | sql  Format for incremental exports (default: csv)
- --incremental-output=  separate | combined  Output style: separate (one file per table) or combined (single file) (default: separate)
State (incremental)
- --state-disk=  Disk used to store state (default: first upload disk; else local)
- --state-path=  Base path for state JSON (default: {diskBase}/_state, or _state if base is empty)
Examples
PostgreSQL — full dump to S3 + Wasabi; keep 7, purge >30 days
php artisan db:backup \
--connection=pgsql \
--mode=full --gzip \
--disks=s3,wasabi \
--remote-map='{"s3":"backups/prod","wasabi":""}' \
--retention-keep=7 --retention-days=30
PostgreSQL — schema-only with per-schema dumps
php artisan db:backup \
  --connection=pgsql \
  --mode=schema --per-schema --gzip \
  --exclude=pg_temp,pg_toast \
  --disks=s3 \
  --remote-path=backups/pg/schema
PostgreSQL — incremental via updated_at CSVs
php artisan db:backup \
  --connection=pgsql \
  --mode=incremental --gzip \
  --pg-csv-include=public.orders,public.users \
  --disks=s3 \
  --remote-path=backups/pg/incr
Produces one CSV per table with rows where updated_at > since. Does not capture deletes or schema changes; pair with periodic full backups.
PostgreSQL — incremental with date range and SQL format
# SQL format, separate files, specific date range
php artisan db:backup \
  --connection=pgsql \
  --mode=incremental \
  --from-date="2025-11-01" \
  --to-date="2025-11-24" \
  --incremental-format=sql \
  --incremental-output=separate \
  --pg-csv-include=public.orders,public.users \
  --gzip \
  --disks=s3

# CSV format, combined file
php artisan db:backup \
  --connection=pgsql \
  --mode=incremental \
  --from-date="2025-11-01 00:00:00" \
  --incremental-format=csv \
  --incremental-output=combined \
  --pg-csv-include=public.* \
  --disks=s3
New incremental options:
- --from-date=           Start date for exports (YYYY-MM-DD or YYYY-MM-DD HH:MM:SS)
- --to-date=             End date for exports (defaults to now)
- --incremental-format=  csv | sql  Export format (default: csv)
- --incremental-output=  separate | combined  Output style (default: separate)
MySQL/MariaDB — full dump
php artisan db:backup \
  --connection=mysql \
  --mode=full --gzip \
  --disks=s3 \
  --remote-path=backups/mysql/full
MySQL/MariaDB — incremental via binlog (default)
php artisan db:backup \
  --connection=mysql \
  --mode=incremental --gzip \
  --disks=s3 \
  --remote-path=backups/mysql/incr
MySQL/MariaDB — incremental via updated_at (new!)
# SQL format, separate files per table
php artisan db:backup \
  --connection=mysql \
  --mode=incremental \
  --incremental-type=updated_at \
  --from-date="2025-11-01" \
  --to-date="2025-11-24" \
  --incremental-format=sql \
  --incremental-output=separate \
  --mysql-csv-include=orders,users,products \
  --gzip \
  --disks=s3

# CSV format, combined file
php artisan db:backup \
  --connection=mysql \
  --mode=incremental \
  --incremental-type=updated_at \
  --from-date="2025-11-01 00:00:00" \
  --incremental-format=csv \
  --incremental-output=combined \
  --mysql-csv-include=orders,users \
  --disks=s3
MySQL incremental options:
- --incremental-type=   binlog | updated_at  Type of incremental (default: binlog)
- --mysql-csv-include=  CSV of tables to include (for updated_at type)
- --mysql-csv-exclude=  CSV of tables to exclude (for updated_at type)
MySQL/MariaDB incremental prerequisites (binlog only)
my.cnf:
server_id=1
log_bin=/var/log/mysql/mysql-bin
binlog_format=ROW
Grant the backup user REPLICATION CLIENT so SHOW MASTER STATUS / SHOW BINARY LOGS work.
Ensure mysqlbinlog is installed on the host running the command.
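The grant itself might look like this; backup_user and the host wildcard are placeholders (the account must already exist), and the second command simply confirms the my.cnf settings above took effect:

# Grant the privilege the binlog incremental needs, then verify binary logging
mysql -u root -p -e "GRANT REPLICATION CLIENT ON *.* TO 'backup_user'@'%'; FLUSH PRIVILEGES;"
mysql -u root -p -e "SHOW VARIABLES LIKE 'log_bin'; SHOW VARIABLES LIKE 'binlog_format';"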
Incremental Export Formats: CSV vs SQL
When using --mode=incremental with the updated_at strategy (PostgreSQL or MySQL), you can choose between CSV and SQL output formats:
CSV Format (default)
Pros:
- Smaller file size
- Faster to generate
- Easy to import into staging tables
- Works well with data analysis tools
Cons:
- Requires manual UPSERT logic to merge into target tables
- Doesn't include SQL structure or metadata
- Need to handle primary key conflicts manually
Use when:
- Loading into staging tables for manual review
- Exporting data for analysis or ETL pipelines
- File size and speed are priorities
SQL Format
Pros:
- Ready-to-execute INSERT statements
- Can be applied directly with mysql or psql
- Includes proper quoting and escaping
- Self-documenting (shows table structure in comments)
Cons:
- Larger file size (SQL overhead)
- Slower to generate
- May have duplicate key conflicts if reapplied
Use when:
- Direct application to target database
- Need human-readable restore scripts
- Automation/replication scenarios
Output Modes
Separate (default):
- One file per table: schema_table1.sql, schema_table2.sql
- Easier to selectively restore individual tables
- Better for large datasets (parallel processing)
Combined:
- Single file with all tables: database_my_incremental_20251124.sql
- Simpler to manage (one file to track)
- Wrapped in transaction (BEGIN/COMMIT)
Uploads: Per-Disk Remote Paths
Set via --remote-map (preferred) or config('dbbackup.upload.remote_map').
- Disk in the map → use its mapped base path.
- Disk not in the map → use --remote-path (or config('dbbackup.upload.remote_path')).
- Path value "" (empty) → upload to bucket root.
Examples
- JSON: --remote-map='{"s3":"backups/prod","wasabi":""}'
- CSV:  --remote-map=s3:backups/prod,wasabi:
Incremental Backups & State
DbBackupman supports different incremental strategies with different state management:
MySQL/MariaDB Binlog (stateful)
Uses {conn}_mysql_state.json → {"file":"mysql-bin.000123","pos":45678}
State is automatically saved and used to resume from the last position.
PostgreSQL/MySQL updated_at (date-range)
When using --from-date and --to-date, backups are based on explicit date ranges and don't use state files. This is ideal for:
- One-time exports of historical data
- Backfilling specific time periods
- Parallel exports of different date ranges
When using the default behavior (no date parameters), PostgreSQL saves: {conn}_pgsql_state.json → {"since_utc":"2025-09-30T10:00:00Z"}
Where is state stored?
- If --state-disk is provided (or defaults to the first upload disk), state is stored on that disk under --state-path (default: {diskBase}/_state, or _state if base is empty).
- If no disks are used, state is stored locally at storage/app/db-backups/_state.
Examples
- Disk base backups/prod → backups/prod/_state/{conn}_mysql_state.json
- Disk base "" (root) → _state/{conn}_mysql_state.json
Date Range Examples
# Export last month's data
php artisan db:backup \
  --connection=pgsql \
  --mode=incremental \
  --from-date="2025-10-01" \
  --to-date="2025-10-31 23:59:59" \
  --incremental-format=sql \
  --pg-csv-include=public.*

# Export today's changes only
php artisan db:backup \
  --connection=mysql \
  --mode=incremental \
  --incremental-type=updated_at \
  --from-date="2025-11-24 00:00:00" \
  --incremental-format=csv \
  --mysql-csv-include=orders,inventory
Restoring Backups
PostgreSQL (full/schema)
Files end in .dump (custom format). Restore with pg_restore:
# create DB if needed
createdb -h HOST -U USER TARGET_DB

# restore
pg_restore -h HOST -U USER -d TARGET_DB -1 app_db_pg_full_YYYYMMDD_HHMMSS.dump

# if gzipped:
pg_restore -h HOST -U USER -d TARGET_DB -1 <(gzcat app_db_pg_full_YYYYMMDD_HHMMSS.dump.gz)
If you used --globals, apply roles/tablespaces first:
psql -h HOST -U USER -d postgres -f globals_pg_YYYYMMDD_HHMMSS.sql
MySQL/MariaDB (full/schema)
mysql -h HOST -u USER -p DB_NAME < app_db_my_full_YYYYMMDD_HHMMSS.sql

# gz:
gzcat app_db_my_full_YYYYMMDD_HHMMSS.sql.gz | mysql -h HOST -u USER -p DB_NAME
Incremental
PostgreSQL CSV
Load into staging tables then upsert into targets using primary keys:
\copy staging_orders FROM 'public_orders.csv' CSV HEADER;

-- MERGE / UPSERT into public.orders using ON CONFLICT
INSERT INTO public.orders
SELECT * FROM staging_orders
ON CONFLICT (id) DO UPDATE SET ...;
Note: CSV incremental does not capture deletes — schedule periodic full backups.
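A fuller sketch of the staging-and-upsert step, assuming a hypothetical public.orders table with columns (id, status, total, updated_at); the column list and conflict target are assumptions, so match them to your schema:

# Load the CSV into a temp staging table, then upsert into the real table (hypothetical columns)
psql -h HOST -U USER -d DB_NAME <<'SQL'
CREATE TEMP TABLE staging_orders (LIKE public.orders INCLUDING DEFAULTS);
\copy staging_orders FROM 'public_orders.csv' CSV HEADER
INSERT INTO public.orders (id, status, total, updated_at)
SELECT id, status, total, updated_at FROM staging_orders
ON CONFLICT (id) DO UPDATE
  SET status     = EXCLUDED.status,
      total      = EXCLUDED.total,
      updated_at = EXCLUDED.updated_at;
SQL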
PostgreSQL SQL
Apply SQL files directly:
# Separate files
psql -h HOST -U USER -d DB_NAME -f public_orders.sql
psql -h HOST -U USER -d DB_NAME -f public_users.sql

# Combined file (gzipped)
gzcat app_db_pg_incremental_20251124.sql.gz | psql -h HOST -U USER -d DB_NAME

# Single file
psql -h HOST -U USER -d DB_NAME < app_db_pg_incremental_20251124.sql
Note: SQL format includes INSERT statements. Handle duplicate key conflicts based on your needs.
MySQL CSV
Load into staging tables then upsert:
# Load CSV into staging
mysql -h HOST -u USER -p DB_NAME -e "LOAD DATA LOCAL INFILE 'orders.csv' INTO TABLE staging_orders FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'"

# Upsert into target
mysql -h HOST -u USER -p DB_NAME -e "INSERT INTO orders SELECT * FROM staging_orders ON DUPLICATE KEY UPDATE ..."
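Spelled out end to end, a hypothetical upsert for an orders(id, status, total, updated_at) table might look like this; the column names, tab delimiter, and header skipping (IGNORE 1 LINES) are assumptions, so match them to your export and schema:

# Stage the CSV, then upsert into the target (requires local_infile enabled on the server)
mysql -h HOST -u USER -p --local-infile=1 DB_NAME -e "
CREATE TEMPORARY TABLE staging_orders LIKE orders;
LOAD DATA LOCAL INFILE 'orders.csv' INTO TABLE staging_orders
  FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' IGNORE 1 LINES;
INSERT INTO orders (id, status, total, updated_at)
SELECT id, status, total, updated_at FROM staging_orders
ON DUPLICATE KEY UPDATE
  status = VALUES(status), total = VALUES(total), updated_at = VALUES(updated_at);
"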
MySQL SQL (updated_at)
Apply SQL files directly:
# Separate files
mysql -h HOST -u USER -p DB_NAME < orders.sql
mysql -h HOST -u USER -p DB_NAME < users.sql

# Combined file (gzipped)
gzcat app_db_my_incremental_20251124.sql.gz | mysql -h HOST -u USER -p DB_NAME
MySQL binlog SQL
Apply directly:
mysql -h HOST -u USER -p DB_NAME < app_db_my_incremental_YYYYMMDD_HHMMSS.binlog.sql
Ensure GTID/binlog settings align with your environment, and apply with care if replication is in use.
Scheduling
Laravel scheduler (app/Console/Kernel.php)
protected function schedule(Schedule $schedule): void
{
    // Nightly full (keep 7, 30 days)
    $schedule->command('db:backup --connection=pgsql --mode=full --gzip --disks=s3 --remote-map={"s3":"backups/prod"} --retention-keep=7 --retention-days=30')
        ->dailyAt('01:30')->withoutOverlapping()->onOneServer();

    // Hourly incremental (PG)
    $schedule->command('db:backup --connection=pgsql --mode=incremental --pg-csv-include=public.orders,public.users --disks=s3 --remote-path=backups/pg/incr')
        ->hourly()->withoutOverlapping()->onOneServer();
}
Using the Facade
You can invoke backups from code with a simple Facade:
use Tekkenking\Dbbackupman\Facades\DbBackupman;

// Full
DbBackupman::run([
    'connection'     => 'pgsql',
    'mode'           => 'full',
    'gzip'           => true,
    'disks'          => 's3',
    'remote-map'     => '{"s3":"backups/prod"}',
    'retention-keep' => 7,
    'retention-days' => 30,
]);

// MySQL incremental with state on another disk
DbBackupman::run([
    'connection'  => 'mysql',
    'mode'        => 'incremental',
    'disks'       => 's3',
    'remote-path' => 'backups/mysql/incr',
    'state-disk'  => 'wasabi',
    'state-path'  => 'backup-state',
]);
Options mirror CLI flags without the leading --. Booleans: pass true to include the flag (e.g., 'gzip' => true).
Optional global alias (so you can call DbBackupman::run() without an import):
// app/Providers/AppServiceProvider.php
use Illuminate\Foundation\AliasLoader;

public function boot(): void
{
    AliasLoader::getInstance()->alias(
        'DbBackupman',
        \Tekkenking\Dbbackupman\Facades\DbBackupman::class
    );
}
Testing the Package
This repo is set up for Orchestra Testbench.
composer install
vendor/bin/phpunit
If you see version conflicts with Laravel/Testbench/PHPUnit, align versions (e.g., Testbench 10 for Laravel 12). A CI matrix can test PHP 8.1–8.3 × Laravel 10/11/12.
Troubleshooting
- “Required tool not found…”: Install the DB client tools your mode needs (pg_dump, psql, pg_dumpall, mysqldump, mysqlbinlog).
- MySQL incremental says binary logs disabled: Verify log_bin and binlog_format=ROW in my.cnf. Ensure your user has REPLICATION CLIENT.
- Uploads land in the wrong folder: Check --remote-map vs --remote-path. Empty path ("") means bucket root.
- Retention didn’t delete old sets: Retention scans manifest filenames locally and per disk. Ensure manifests are present.
- PostgreSQL incremental CSVs are empty: No rows matched updated_at > since. Validate --since/state and table patterns.
Version Compatibility
| Laravel | PHP | Testbench | PHPUnit |
|---|---|---|---|
| 10.x | ≥8.1 | ^8.0 | ^10.5 |
| 11.x | ≥8.2 | ^9.0 | ^10.5 |
| 12.x | ≥8.2 | ^10.0 | ^11.0 |
Security Notes
- Secrets on CLI: MySQL tools receive --password=..., which may be visible to local process lists. Run on trusted hosts. (Postgres uses the PGPASSWORD env var for pg_dump/psql.)
- Lock down storage/app/db-backups permissions.
- Use least-privilege DB accounts suitable for backups.
Contributing
PRs and issues welcome! Please run tests locally:
composer test
# or
vendor/bin/phpunit
If you’re adding behavior, include/adjust tests where appropriate.
License
MIT. See LICENSE.