This project includes two types of performance testing:
- **K6 Testing** - for API endpoint load testing
- **Pytest-Benchmark** - for backend component performance benchmarking
## K6 Testing

K6 is used for performance and load testing of API endpoints.
Run the tests using Docker Compose:

```bash
docker-compose -f docker-compose.k6.yml up
```

The Docker setup handles all K6 dependencies automatically.
- **Test files**: place your K6 test scripts in `./k6/tests/`
- **Results**: test outputs are saved to `./k6/results/`
- **Dataset**: test data should be placed in `./k6/data`. See `./k6/data/README.md` for initialization instructions.
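Putting that together, the `k6/` directory is expected to look roughly like this:

```
k6/
├── tests/      # K6 test scripts (*.js)
├── results/    # test outputs
└── data/       # test datasets (see data/README.md)
```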
By default, the Docker Compose command runs every `.js` file in `./k6/tests/`, but you can pass additional arguments to the K6 container (in the docker-compose file or when using `docker run`) to run a specific file, as sketched below.
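For example, assuming the compose service is named `k6` and the test directory is mounted at `/tests` inside the container (both names are illustrative; check `docker-compose.k6.yml` for the actual values), a single script could be run with:

```bash
# Run one script instead of the whole suite; the service name and mount path are assumptions
docker-compose -f docker-compose.k6.yml run k6 run /tests/my_test.js
```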
## Pytest-Benchmark

pytest-benchmark is used for performance benchmarking of backend components such as the ingester.
- **Test database**: you need a running test database. Start it with:

```bash
docker compose -f docker-compose.test.yml up test_db -d
```
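To confirm the database container is up before running the tests, you can check its status with standard Docker Compose commands (the `test_db` service name comes from the compose file above):

```bash
# Show the status of the test database service
docker compose -f docker-compose.test.yml ps test_db
```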
The easiest way to run performance tests locally is to use the provided script:

```bash
cd backend
./run_perf_tests.sh
```

This script:
- Sets the correct database connection parameters
- Runs pytest with the `performance` marker
- Disables parallel execution (required for accurate benchmarks with pytest-benchmark)
You can also run performance tests manually with more control:
```bash
cd backend
export DJANGO_SETTINGS_MODULE=kernelCI.perf_test_settings
export TEST_DB_HOST=localhost
export TEST_DB_PORT=5435
poetry run pytest -m performance --use-local-db -n 0
```

To run a specific performance test file:
```bash
cd backend
./run_perf_tests.sh kernelCI_app/tests/performanceTests/test_ingest_perf.py
```

Performance tests generate detailed statistics, including:
- **Mean execution time**: average time taken across all rounds
- **Standard deviation**: variability in execution times
- **Min/Max times**: fastest and slowest execution times
Example output:

```
test_ingest_perf
Mean: 8.23s
Std Dev: 0.45s
Min: 7.89s
Max: 9.01s
```
pytest-benchmark automatically saves detailed benchmark results in JSON format to a `.benchmarks` folder. This folder contains comprehensive performance data, including:
- Historical benchmark runs
- Detailed timing statistics for each test
- Comparison data between runs
- Additional metadata about the test execution environment
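Because runs are stored as JSON, they can be saved and compared with pytest-benchmark's standard options and CLI. A sketch, assuming the environment variables from the manual section are set and using illustrative run identifiers:

```bash
cd backend
# Save this run's results under .benchmarks (pytest-benchmark's --benchmark-autosave flag)
poetry run pytest -m performance --use-local-db -n 0 --benchmark-autosave
# List saved runs, then compare two of them by ID
poetry run pytest-benchmark list
poetry run pytest-benchmark compare 0001 0002
```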
A few additional notes:
- **Performance thresholds**: tests assert a mean execution time under 10s; this may vary across hardware and CI environments.
- **Database setup**: uses the dedicated `perf_test_settings.py`, which enables migrations for proper schema setup.
- **Parallel execution**: must run with `-n 0` (no parallel execution) for accurate benchmarking and to avoid conflicts with pytest-benchmark.
- **File cleanup**: tests automatically back up and restore submission files between rounds.