Tune Mode¶
Tune mode (`--stochastic-tune`) empirically profiles your stochastic tests to discover distributional parameters (primarily variance), then persists them to `.stochastic.toml`. On subsequent runs, the framework uses these discovered parameters to select tighter bounds and require fewer samples.
Quick Start¶
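A minimal end-to-end run (both commands are covered in detail below):

```bash
# Profile stochastic tests and write tuned parameters to .stochastic.toml
pytest --stochastic-tune

# Subsequent runs load .stochastic.toml automatically and use tighter bounds
pytest
```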
How It Works¶
- Profiling: Each `@stochastic_test` function is called 50,000 times (configurable) to collect samples
- Variance UCB: A rigorous upper confidence bound on the true variance is computed using the chi-squared distribution
- Persistence: Results are written to `.stochastic.toml` in the project root
- Automatic loading: On subsequent runs, the decorator loads tuned parameters and adds `variance_tuned` to the declared properties, enabling the `bernstein_tuned` bound
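As a concrete example, a test like the following would be profiled during tuning. The import path and the `expected` argument here are assumptions for illustration, not the plugin's exact signature; the other parameters (`bounds`, `atol`, `failure_prob`) appear elsewhere in this page:

```python
import random

from pytest_stochastic import stochastic_test  # import path is an assumption

# Hypothetical test: under --stochastic-tune this function is called
# 50,000 times and a variance UCB of its return values is recorded.
@stochastic_test(expected=0.5, atol=0.05, failure_prob=1e-8, bounds=(0, 1))
def test_uniform_mean():
    return random.random()
```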
The Variance UCB¶
The upper confidence bound ensures the discovered variance is conservative (larger than the true variance with high probability):

\[
\sigma^2_{\text{UCB}} = \frac{(n-1)\, s^2}{\chi^2_{\alpha}(n-1)}
\]

where \(s^2\) is the sample variance and \(\chi^2_{\alpha}(n-1)\) is the \(\alpha\)-quantile of the chi-squared distribution with \(n-1\) degrees of freedom. The default confidence level is \(\alpha = 10^{-4}\), meaning the UCB is valid with probability at least \(1 - 10^{-4}\).
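A minimal sketch of this computation, assuming SciPy's `chi2.ppf` for the quantile:

```python
import numpy as np
from scipy.stats import chi2

def variance_ucb(samples, alpha=1e-4):
    """Upper confidence bound on the variance, valid with probability >= 1 - alpha."""
    n = len(samples)
    s2 = np.var(samples, ddof=1)  # unbiased sample variance
    # (n - 1) * s2 / sigma^2 follows a chi-squared(n - 1) distribution,
    # so dividing by the alpha-quantile gives a conservative overestimate.
    return (n - 1) * s2 / chi2.ppf(alpha, df=n - 1)
```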
CLI Options¶
--stochastic-tune¶
Enable tune mode. Stochastic tests are profiled instead of being run normally.
--stochastic-tune-samples¶
Number of samples per test during tuning. Default: 50,000.
More samples produce a tighter variance UCB but take longer.
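For example, to spend more samples for a tighter UCB:

```bash
pytest --stochastic-tune --stochastic-tune-samples 200000
```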
The .stochastic.toml File¶
After tuning, the file looks like:
```toml
# Auto-generated by pytest-stochastic --stochastic-tune

[tests."tests.test_example.test_uniform_mean"]
n_tune_samples = 50000
observed_range = [0.000012, 0.999987]
tuned_at = "2026-02-22T15:30:00+00:00"
variance = 0.08345
```
Each entry records:
| Field | Description |
|---|---|
| `variance` | Upper confidence bound on the true variance |
| `observed_range` | `[min, max]` of observed samples |
| `n_tune_samples` | Number of samples used for tuning |
| `tuned_at` | ISO 8601 timestamp of when tuning was run |
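Since the file is plain TOML, entries can be inspected programmatically; a sketch using the standard-library `tomllib` (Python 3.11+), assuming the layout shown above:

```python
import tomllib

with open(".stochastic.toml", "rb") as f:
    tuned = tomllib.load(f)

# Keys follow the [tests."<module>.<function>"] layout shown above.
entry = tuned["tests"]["tests.test_example.test_uniform_mean"]
print(entry["variance"], entry["n_tune_samples"])
```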
Merging Behavior¶
Re-running `--stochastic-tune` merges results with existing data. Tests not included in the current run keep their previous tuned parameters.
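A minimal sketch of these merge semantics, with entries keyed by test as above:

```python
def merge_tuned(existing: dict, current_run: dict) -> dict:
    """Entries from the current run overwrite stale ones; tests absent
    from this run keep their previously tuned parameters."""
    merged = dict(existing)
    merged.update(current_run)
    return merged
```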
How Tuned Parameters Improve Tests¶
Consider a test with `bounds=(0, 1)` and no declared variance:
- Without tuning: Hoeffding's bound is used. For
atol=0.05andfailure_prob=1e-8, this requires \(n = 7{,}378\). - With tuning: If the true variance is \(\approx 0.083\), the tuned Bernstein bound might require only \(n \approx 2{,}500\) — a 66% reduction.
The key: tuning adds `variance_tuned` to the property set, enabling the `bernstein_tuned` bound, which combines the declared bounds with the machine-discovered variance.
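For intuition, the textbook forms of the two bounds can be compared directly. This is a sketch only: the plugin's internal constants differ from the textbook ones, so the figures quoted above will not be reproduced exactly.

```python
from math import ceil, log

def hoeffding_n(atol, delta, rng=1.0):
    # Textbook Hoeffding: n >= (b - a)^2 * ln(2/delta) / (2 * atol^2)
    return ceil(rng**2 * log(2 / delta) / (2 * atol**2))

def bernstein_n(atol, delta, var, rng=1.0):
    # Textbook Bernstein: the variance term dominates when var << (b - a)^2
    return ceil((2 * var + (2 / 3) * rng * atol) * log(2 / delta) / atol**2)

print(hoeffding_n(0.05, 1e-8))           # variance-free bound
print(bernstein_n(0.05, 1e-8, 0.083))    # much smaller once variance is known
```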
Workflow¶
Initial Setup¶
```bash
# 1. Write tests with bounds
# 2. Run tuning to discover variance
pytest --stochastic-tune

# 3. Commit .stochastic.toml
git add .stochastic.toml
git commit -m "Add tuned stochastic test parameters"
```
Periodic Re-tuning¶
Re-tune when:
- You change the test function's implementation
- You update the distribution being tested
- You want tighter bounds after a code change
```bash
pytest --stochastic-tune
git diff .stochastic.toml  # Review changes
git add .stochastic.toml && git commit -m "Re-tune stochastic tests"
```
CI Integration¶
Run tuning locally or in a dedicated CI job, then commit the results. Normal CI runs use the committed `.stochastic.toml` automatically.
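A dedicated tuning job might look like the following; the surrounding CI wiring (scheduling, opening a PR with the diff) is up to you:

```bash
# Re-tune, then surface any parameter drift for review
pytest --stochastic-tune
git diff --stat .stochastic.toml
```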
Test Key Matching¶
Tuned parameters are matched to tests by key. The tune mode stores keys derived from the pytest node ID (e.g., `tests.test_example.test_uniform_mean`). At load time, the framework tries several candidate key formats based on the function's module and qualified name to find a match.
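A hypothetical illustration of how such candidate keys might be derived; the actual lookup logic is internal to the plugin:

```python
def candidate_keys(func):
    # e.g. module "tests.test_example", qualname "test_uniform_mean"
    module = func.__module__
    qualname = func.__qualname__
    # Try the fully qualified form first, then progressively shorter forms.
    return [f"{module}.{qualname}", f"{module.split('.')[-1]}.{qualname}", qualname]
```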