
Benchmarking

sql-splitter includes a Docker-based benchmarking suite to compare performance against other SQL dump splitting tools.

| Tool | Language | Stars | Notes |
| --- | --- | --- | --- |
| sql-splitter | Rust | - | Multi-dialect, streaming I/O |
| mysqldbsplit | PHP | 101 | Fastest tool! Selective extraction |
| mysqldump-splitter | Rust | 1 | Hierarchical output, gzip support |
| mysql-dump-splitter | Go | 0 | Include/exclude tables |
| mysqldumpsplit | Go | ~40 | Buffers in memory, has deadlock bug* |
| mysqldumpsplitter | Bash/awk | 540+ | Most popular shell-based tool |
| mysql_splitdump | Bash/csplit | 93 | Uses GNU coreutils csplit |
| mysqldumpsplit | Node.js | 55 | Requires Node 10 (gulp 3.x) |
| mysql-dump-split | Ruby | 77 | Archived project |
| extract-mysql-dump | Python | ~5 | Multi-database extraction, Python 3.3+ |

\*The original Go tool has a deadlock bug with non-interleaved dumps; benchmarks use a patched fork.

Benchmarks run inside Docker for reproducibility across different machines.

```bash
# Build the Docker image (first time only)
make docker-build

# Run benchmark with generated 100MB test file
make docker-bench

# Run with custom size (e.g., 200MB)
./docker/run-benchmark.sh --generate 200
```

Flags for `run-benchmark.sh`:

| Flag | Description |
| --- | --- |
| `--generate SIZE` | Generate test data of SIZE MB |
| `--runs N` | Number of benchmark runs (default: 3) |
| `--warmup N` | Warmup runs before timing (default: 1) |
| `--export FILE` | Export results to a markdown file |
| `--list` | Show installed tools |
| `--test FILE` | Test which tools work with the given file |

```bash
# Generate 500MB test and run 5 iterations
./docker/run-benchmark.sh --generate 500 --runs 5

# Export results to markdown
./docker/run-benchmark.sh --generate 100 --export results.md

# List available tools
./docker/run-benchmark.sh --list

# Test which tools work with a specific file
./docker/run-benchmark.sh --test /data/dump.sql
```

Hardware: Apple M2 Max, 32GB RAM, Docker Desktop (linux/arm64)

100MB test file:

| Tool | Mean | σ | Throughput | Relative |
| --- | --- | --- | --- | --- |
| mysqldbsplit (PHP) | 83 ms | ±4 ms | 1238 MB/s | 1.00 (fastest) |
| mysql-dump-splitter (Go/Bekkema) | 102 ms | ±2 ms | 1010 MB/s | 1.23x slower |
| mysqldump-splitter (Rust/Scoopit) | 124 ms | ±16 ms | 835 MB/s | 1.48x slower |
| mysqldumpsplit (Go)* | 155 ms | ±16 ms | 666 MB/s | 1.86x slower |
| sql-splitter (Rust) | 226 ms | ±9 ms | 457 MB/s | 2.71x slower |
| mysql_splitdump (csplit) | 231 ms | ±28 ms | 447 MB/s | 2.77x slower |
| mysqldumpsplit (Node.js) | 450 ms | ±29 ms | 230 MB/s | 5.39x slower |
| mysql-dump-split (Ruby) | 970 ms | ±14 ms | 106 MB/s | 11.6x slower |
| mysqldumpsplitter (Bash) | 1049 ms | ±142 ms | 98 MB/s | 12.6x slower |
| extract-mysql-dump (Python) | 1395 ms | ±15 ms | 74 MB/s | 16.7x slower |

1GB test file:

| Tool | Mean | σ | Throughput | Relative |
| --- | --- | --- | --- | --- |
| mysqldumpsplit (Go)* | 1.29 s | ±0.02 s | 802 MB/s | 1.00 (fastest) |
| sql-splitter (Rust) | 1.84 s | ±0.07 s | 563 MB/s | 1.42x slower |
| mysql_splitdump (csplit) | 1.85 s | ±0.02 s | 558 MB/s | 1.44x slower |
| mysqldumpsplit (Node.js) | 2.72 s | ±0.01 s | 381 MB/s | 2.11x slower |
| mysqldumpsplitter (Bash) | 8.81 s | ±0.02 s | 118 MB/s | 6.82x slower |
| mysql-dump-split (Ruby) | 9.05 s | ±0.31 s | 114 MB/s | 7.01x slower |

5GB test file:

| Tool | Time | Throughput | Relative |
| --- | --- | --- | --- |
| sql-splitter (Rust) | 18.4 s | 283 MB/s | 1.00 (fastest) |
| mysqldumpsplit (Go)* | 27.1 s | 191 MB/s | 1.47x slower |
| mysqldumpsplit (Node.js) | 28.7 s | 181 MB/s | 1.56x slower |
| mysqldumpsplitter (Bash) | 55.5 s | 94 MB/s | 3.02x slower |
| mysql_splitdump (csplit) | 82.5 s | 63 MB/s | 4.48x slower |
| mysql-dump-split (Ruby) | 103 s | 50 MB/s | 5.60x slower |

At 5GB, sql-splitter becomes the fastest tool because the Go tool’s memory-buffering strategy causes significant slowdown under memory pressure.

Key takeaways:

- mysqldbsplit (PHP) is the fastest on mysqldump format at 1.2+ GB/s throughput, surprisingly beating all of the compiled tools.
- The newer Go and Rust competitors (Bekkema, Scoopit) are also faster than sql-splitter on standard mysqldump format.
- sql-splitter (Rust) uses streaming I/O with a fixed ~10-15 MB of memory regardless of file size, so it is slower on small files but consistent on large ones (see the sketch below).
- csplit is surprisingly fast for a shell tool, but relies on GNU coreutils (not available on stock macOS).
- Node.js is ~5x slower than PHP but still reasonable for JS-based workflows.
- Ruby, Bash, and awk are 11-13x slower: fine for one-off use, not for automation.
- Python (extract-mysql-dump) is the slowest at ~17x slower; it is designed specifically for multi-database extraction scenarios.
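
That large-file consistency comes from the streaming design. The following is a minimal sketch of the idea, not sql-splitter's actual implementation: the function name and the simple `CREATE TABLE` prefix matching are illustrative assumptions. Each line is routed to the current table's output file as it is read, so memory stays bounded by the read buffer instead of growing with the dump.

```rust
use std::fs::File;
use std::io::{BufRead, BufReader, BufWriter, Write};

// Hypothetical sketch of streaming splitting: RAM use is bounded by the
// read buffer no matter how large the input is, because every line goes
// straight to the current table's output file.
fn split_streaming(path: &str) -> std::io::Result<()> {
    let reader = BufReader::new(File::open(path)?);
    let mut out: Option<BufWriter<File>> = None;

    for line in reader.lines() {
        let line = line?;
        // Open a new output file whenever a new table's DDL begins.
        // (A real splitter also handles dialect quoting and multi-line
        // statements; this matches a simple prefix only.)
        if let Some(rest) = line.strip_prefix("CREATE TABLE ") {
            let table = rest
                .split(|c: char| c == '`' || c == '"' || c == '(' || c == ' ')
                .find(|s| !s.is_empty())
                .unwrap_or("unknown");
            out = Some(BufWriter::new(File::create(format!("{table}.sql"))?));
        }
        // Lines before the first CREATE TABLE (headers, SET statements)
        // are dropped in this simplified version.
        if let Some(w) = out.as_mut() {
            writeln!(w, "{line}")?;
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    split_streaming("dump.sql")
}
```

A tool that instead accumulates each table's statements in memory, as the Go tool does, is fast while everything fits in RAM, which is why the crossover only appears at multi-gigabyte sizes.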

All competitors only work with the standard mysqldump format, which includes comment markers like:

```sql
-- Table structure for table `users`
```

sql-splitter parses actual SQL statements (`CREATE TABLE`, `INSERT INTO`, `COPY`), so it works with:

- TablePlus exports
- DBeaver exports
- pg_dump (PostgreSQL)
- sqlite3 `.dump` output
- Any valid SQL file

Competitors produce 0 tables on non-mysqldump files.
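
To make the format difference concrete, here is a hypothetical sketch of statement-based boundary detection; the function name and the simplified prefix matching are illustrative assumptions, not sql-splitter's real parser. A marker-based tool keys on mysqldump's comment lines instead and finds nothing in other dump formats.

```rust
/// Hypothetical sketch: detect a table boundary from the SQL statement
/// itself rather than from a mysqldump comment marker. Marker-based
/// tools key on lines like "-- Table structure for table ..." and so
/// see zero tables in pg_dump, sqlite3, or GUI-tool exports.
fn table_of(line: &str) -> Option<&str> {
    let rest = line
        .strip_prefix("CREATE TABLE ")
        .or_else(|| line.strip_prefix("INSERT INTO "))
        .or_else(|| line.strip_prefix("COPY "))?; // COPY = pg_dump data
    // Take the first identifier, stripping each dialect's quoting.
    rest.split(|c: char| c == ' ' || c == '(')
        .next()
        .map(|ident| ident.trim_matches(|c| c == '`' || c == '"'))
}

fn main() {
    // The same detector works across dump formats:
    assert_eq!(table_of("CREATE TABLE `users` ("), Some("users"));
    assert_eq!(table_of("INSERT INTO \"users\" VALUES (1);"), Some("users"));
    assert_eq!(table_of("COPY users (id, name) FROM stdin;"), Some("users"));
    println!("table detected in every format");
}
```

Statement-level detection is also what allows pg_dump data sections to be split at all, since `COPY` blocks carry no mysqldump-style markers.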

Limitations by tool:

| Tool | Issue |
| --- | --- |
| mysqldbsplit (PHP) | Requires PHP CLI; mysqldump format only |
| mysqldump-splitter (Scoopit) | Rust; mysqldump format only; hierarchical output structure |
| mysql-dump-splitter (Bekkema) | Go; mysqldump format only |
| mysqldumpsplit (Go/afrase) | Deadlocks on files where all INSERTs for one table come before the next table |
| mysqldumpsplit (Node.js/vekexasia) | Requires Node 10 (gulp 3.x is incompatible with Node 12+) |
| mysql-dump-split (Ruby) | Project archived, unmaintained |
| mysqldumpsplitter (Bash/awk) | Slow, Unix-only |
| mysql_splitdump (csplit) | Requires GNU coreutils |
| extract-mysql-dump (Python) | Designed for multi-database dumps; no absolute paths; slowest of all tools |
| sql-splitter | Slower than specialized tools on mysqldump; faster on non-standard SQL |

sql-splitter is the better choice for:

1. Non-mysqldump formats (TablePlus, DBeaver, pg_dump, sqlite)
2. Large files (>1GB) where memory matters
3. CI/CD pipelines needing consistent behavior
4. Multi-dialect projects (MySQL + PostgreSQL + SQLite + MSSQL)

The benchmark infrastructure lives in the `docker/` directory:

| File | Purpose |
| --- | --- |
| `Dockerfile.benchmark` | Container with all tools installed |
| `docker-compose.benchmark.yml` | Compose configuration |
| `run-benchmark.sh` | Entry point script |
| `benchmark-runner.sh` | Core benchmarking logic |
| `test-competitors.sh` | Test tool compatibility |