Free Z-Score Calculator with Normal Distribution Graphs

All-in-one statistics tool: compute Z-scores, percentile ranks, CDF probabilities, inverse Z, confidence intervals, hypothesis Z-tests, sample sizes, and browse the full interactive Z-table. Every result is visualized on a shaded normal distribution curve with step-by-step working.

Z-Score & Percentile | PDF / CDF | Inverse Z | Confidence Intervals | Hypothesis Z-Test | Z-Table Lookup | Sample Size

Z-Score Calculator

Enter a value, population mean, and standard deviation to get the Z-score, percentile rank, and shaded probability area. Highlighted fields are your primary inputs — change these to recalculate.

The data point to evaluate
Population or group mean
Must be greater than 0
Quick presets:
Original Distribution (μ, σ)
Standard Normal Z-Distribution

Batch Z-Score Calculator

Paste or type a list of values below (comma or newline separated). Z-scores are computed from the dataset mean and standard deviation automatically, or supply custom μ and σ. Highlighted fields are your primary inputs.

Your list of numeric values
# | Value (x) | Z-Score | Percentile | P(X ≤ x) | Classification

Inverse Z-Score: Percentile to Value

Find the Z-score and raw value for any percentile, or enter a Z directly to find its percentile. Change the highlighted percentile field to recalculate.

Primary input (0.01 to 99.99)
Population mean μ (for raw value output)
Population SD σ (for raw value output)
Common Critical Z Values (click to load)

Confidence Interval Calculator

Compute a Z-based confidence interval for a population mean. All four standard confidence levels are shown automatically. Highlighted fields are your primary inputs.

Your sample mean
Population or sample SD
Number of observations

Z-Test Calculator (One-Sample & Two-Sample)

Test whether a sample mean differs significantly from a known population mean, or compare two sample means. Outputs test statistic, p-value, effect size, and a clear reject/fail-to-reject decision. Highlighted fields are your primary inputs.

Your sample mean
Null hypothesis value
Known population SD
Number of observations

Sample Size Calculator

Find the minimum sample size needed for a given margin of error and confidence level, or for a hypothesis test with a specified power. Highlighted fields are your primary inputs.

Known or estimated SD
Acceptable +/- range

Interactive Z-Table (Standard Normal CDF)

Click any cell to see P(Z ≤ z) for that Z-value. The table covers Z from 0.00 to 3.49 in steps of 0.01. For negative Z, use P(Z ≤ -z) = 1 - P(Z ≤ z). Type a Z value in the search field to jump directly to it.

Type any Z to highlight

Empirical Rule Visualizer

Enter your distribution parameters to see exact value ranges for each standard deviation band. Change the highlighted mean and SD to update the chart instantly.

Distribution center
Must be greater than 0

Theory, Formulas & Derivations

The Z-score (standard score) is a dimensionless measure expressing how many standard deviations a data point is from the population mean. It is the foundation of normal-distribution-based inference: percentiles, confidence intervals, hypothesis tests, and power analysis all rely on Z-scores or the standard normal distribution they define.

1. The Z-Score Formula and Derivation

Any normally distributed random variable X ~ N(μ, σ²) can be transformed to the standard normal Z ~ N(0, 1) by subtracting the mean and dividing by the standard deviation:

Z = (x − μ) / σ
Z = standard score | x = observed value | μ = population mean | σ = population standard deviation

Derivation: If X ~ N(μ, σ²) then E[Z] = (E[X] − μ)/σ = 0 and Var(Z) = Var(X)/σ² = 1, confirming Z ~ N(0, 1). For a sample when σ is unknown, use the sample SD s; the result follows a t-distribution with (n − 1) degrees of freedom.
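The transformation can be sketched in a few lines of Python (the function name z_score is illustrative, not part of the calculator):

```python
def z_score(x, mu, sigma):
    """Number of standard deviations x lies from the mean mu."""
    if sigma <= 0:
        raise ValueError("sigma must be greater than 0")
    return (x - mu) / sigma

# Example: an IQ of 130 with mu = 100, sigma = 15
z = z_score(130, 100, 15)  # 2.0
```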

2. Probability Density Function (PDF)
f(x) = (1 / (σ√(2π))) × exp(−(x−μ)² / (2σ²))
For the standard normal N(0,1): ϕ(z) = (1/√(2π)) × exp(−z²/2)  |  peak ϕ(0) ≈ 0.3989
3. Cumulative Distribution Function (CDF) and the Error Function
Φ(z) = P(Z ≤ z) = (1/2) × [1 + erf(z/√2)]
erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt  |  Φ(0) = 0.5  |  Φ(1.645) ≈ 0.95  |  Φ(1.96) ≈ 0.975
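Both functions are a few lines in Python, since math.erf is built in (the helper names normal_pdf and phi are illustrative):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Normal density f(x) = exp(-(x - mu)^2 / (2 sigma^2)) / (sigma sqrt(2 pi))."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def phi(z):
    """Standard normal CDF via the error function: Phi(z) = (1 + erf(z / sqrt 2)) / 2."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(normal_pdf(0), 4))  # 0.3989, the peak of the standard normal
print(round(phi(1.96), 4))      # 0.975
```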
4. Key Probability Calculations
Probability | Formula | When to Use
Left tail P(X ≤ x) | Φ(z) | Percentile rank, CDF lookup
Right tail P(X ≥ x) | 1 − Φ(z) | Exceedance probability
Between P(a ≤ X ≤ b) | Φ(z_b) − Φ(z_a) | Interval probability
Two-tail P(|Z| ≥ z) | 2(1 − Φ(|z|)) | Two-tailed hypothesis test p-value
Two-tail P(|Z| ≤ z) | 2Φ(|z|) − 1 | Symmetric confidence region
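The four tail calculations above map directly onto the CDF; a minimal Python sketch (function names are illustrative):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def left_tail(z):
    return phi(z)                     # P(Z <= z): percentile rank

def right_tail(z):
    return 1.0 - phi(z)               # P(Z >= z): exceedance

def between(a, b):
    return phi(b) - phi(a)            # P(a <= Z <= b)

def two_tail(z):
    return 2.0 * (1.0 - phi(abs(z)))  # P(|Z| >= z): two-tailed p-value

print(round(between(-1.0, 1.0), 4))  # 0.6827
print(round(two_tail(1.96), 4))      # 0.05
```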
5. The Empirical Rule (68-95-99.7 Rule)
Range | Z-Score Band | % Within | Practical Meaning
μ ± 1σ | −1 to +1 | 68.27% | Majority of values; within 1 SD is considered "normal"
μ ± 2σ | −2 to +2 | 95.45% | Nearly all values; ≥2 SD is statistically notable
μ ± 3σ | −3 to +3 | 99.73% | Virtually all values; beyond 3 SD triggers investigation
μ ± 1.645σ | −1.645 to +1.645 | 90.0% | 90% confidence region; z* for 90% CI
μ ± 1.96σ | −1.96 to +1.96 | 95.0% | 95% confidence region; z* for 95% CI
μ ± 2.576σ | −2.576 to +2.576 | 99.0% | 99% confidence region; z* for 99% CI
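Each band percentage is just P(|Z| ≤ z) = 2Φ(z) − 1, which simplifies to erf(z/√2); a short sketch to verify the table (the coverage name is illustrative):

```python
import math

def coverage(z):
    """P(|Z| <= z) = 2*Phi(z) - 1 = erf(z / sqrt 2): fraction inside mu +/- z*sigma."""
    return math.erf(z / math.sqrt(2.0))

for z in (1, 2, 3, 1.645, 1.96, 2.576):
    print(f"mu +/- {z} sigma covers {100 * coverage(z):.2f}%")
```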
6. Confidence Interval Derivation
CI = x̄ ± z* × (σ / √n)
x̄ = sample mean | z* = critical Z | σ/√n = standard error (SE) | ME = z* × SE

Key insight: The margin of error shrinks with larger n (∝ 1/√n) and grows with higher confidence (larger z*). A 95% CI does not mean 95% probability for this specific interval; it means 95% of all such intervals contain μ.
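The CI formula translates directly into code; a minimal sketch with an illustrative helper name, defaulting z* to the 95% critical value:

```python
import math

def z_confidence_interval(xbar, sigma, n, z_star=1.96):
    """Z-based CI for the mean: xbar +/- z* * sigma / sqrt(n)."""
    me = z_star * sigma / math.sqrt(n)  # margin of error
    return xbar - me, xbar + me

# Example: xbar = 100, sigma = 15, n = 36 at 95% confidence
lo, hi = z_confidence_interval(100, 15, 36)
# ME = 1.96 * 15 / 6 = 4.9, so the interval is (95.1, 104.9)
```

Quadrupling n halves the margin of error, in line with the 1/√n scaling noted above.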

7. Z-Test for a Population Mean
Z = (x̄ − μ₀) / (σ / √n)
H₀: μ = μ₀ | Reject H₀ if |Z| > zα/2 (two-tailed) or Z > zα (right-tailed)
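The two-tailed test can be sketched as follows (helper names are illustrative):

```python
import math

def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def one_sample_z_test(xbar, mu0, sigma, n, alpha=0.05):
    """Two-tailed one-sample Z-test; returns (z, p_value, reject_h0)."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    p = 2.0 * (1.0 - phi(abs(z)))
    return z, p, p < alpha

# Example: sample mean 103 vs mu0 = 100, sigma = 15, n = 100
z, p, reject = one_sample_z_test(103, 100, 15, 100)
print(f"Z = {z:.2f}, p = {p:.4f}, reject H0: {reject}")  # Z = 2.00, p = 0.0455, reject H0: True
```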
8. Two-Sample Z-Test
Z = (x̄₁ − x̄₂) / √(σ₁²/n₁ + σ₂²/n₂)
Tests H₀: μ₁ = μ₂ against H₁: μ₁ ≠ μ₂ (or directional alternatives)
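A sketch of the two-sample statistic (illustrative names; the same phi helper gives the p-value):

```python
import math

def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_sample_z(x1, x2, s1, s2, n1, n2):
    """Z statistic and two-tailed p-value for H0: mu1 == mu2."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)  # standard error of the difference
    z = (x1 - x2) / se
    return z, 2.0 * (1.0 - phi(abs(z)))

# Example: means 105 vs 100, both SDs 15, n = 50 per group
z, p = two_sample_z(105, 100, 15, 15, 50, 50)
# SE = sqrt(225/50 + 225/50) = 3, so z = 5/3 ≈ 1.667
```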
9. Sample Size Formulas
For CI: n = (z* × σ / ME)²
Round up to nearest integer | ME = desired margin of error
For hypothesis test power: n = ((zα/2 + zβ) × σ / δ)²
δ = |μ₁ − μ₀| minimum detectable effect | β = Type II error | 1−β = desired power
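Both formulas round up, since n must be an integer; a minimal sketch (function names are illustrative, and z_beta ≈ 0.842 corresponds to 80% power):

```python
import math

def sample_size_ci(z_star, sigma, me):
    """Minimum n so the CI half-width is at most me."""
    return math.ceil((z_star * sigma / me) ** 2)

def sample_size_power(z_alpha2, z_beta, sigma, delta):
    """Minimum n for a two-tailed test detecting effect delta with power 1 - beta."""
    return math.ceil(((z_alpha2 + z_beta) * sigma / delta) ** 2)

print(sample_size_ci(1.96, 15, 3))            # 97, since (1.96*15/3)^2 = 96.04
print(sample_size_power(1.96, 0.842, 15, 5))  # 71
```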
10. Effect Size: Cohen's d
d = (μ₁ − μ₂) / σ_pooled
Small: d ≈ 0.2 | Medium: d ≈ 0.5 | Large: d ≥ 0.8 (Cohen, 1988)
11. Interpreting Z-Score Magnitude
|Z| Value | Interpretation | Percentile (left tail) | Rarity
0.00 | Exactly at the mean | 50th | Most common
0.67 | Slightly above/below average | 75th / 25th | Common (1 in 2)
1.00 | 1 SD from mean | 84th / 16th | Moderately unusual
1.645 | 90th percentile boundary | 90th / 10th | 1 in 10
1.96 | 95% CI boundary | 97.5th / 2.5th | 1 in 20
2.576 | 99% CI boundary | 99.5th / 0.5th | 1 in 200
3.00 | Extreme outlier (3 SD) | 99.87th / 0.13th | 1 in 370
4.00 | Very rare (4 SD) | 99.997th | 1 in 15,787
6.00 | Six Sigma quality target | ~99.9999966% | 3.4 per million
12. Chebyshev's Inequality (Non-Normal Data)
P(|X − μ| ≤ kσ) ≥ 1 − 1/k²
Applies to ANY distribution | k=2: at least 75% within ±2σ | k=3: at least 88.9% within ±3σ
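The bound is a one-liner; a sketch with an illustrative name (the bound is vacuous for k ≤ 1, where it gives 0):

```python
def chebyshev_lower_bound(k):
    """Distribution-free lower bound on P(|X - mu| <= k*sigma)."""
    if k <= 1:
        return 0.0  # the inequality says nothing useful here
    return 1.0 - 1.0 / k ** 2

print(chebyshev_lower_bound(2))           # 0.75
print(round(chebyshev_lower_bound(3), 3)) # 0.889
```

Compare with the normal-distribution values: 95.45% within ±2σ versus Chebyshev's guaranteed 75%, showing how much weaker the distribution-free bound is.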
13. Z-Scores in Real-World Applications
Application | x represents | μ | σ | Key rule
Standardised Testing (SAT) | Raw score | 500 per section | 100 | Percentile ranking of students
IQ Tests | IQ score | 100 | 15 | Z > 2 = top 2.3% (gifted)
Quality Control (SPC) | Measurement | Target spec | Process σ | |Z| > 3 triggers investigation
Six Sigma Manufacturing | Defect distance | Target | Process σ | Z = 6 = 3.4 DPMO
Finance (Altman Z-Score) | Financial ratios | Varies | Varies | Z < 1.81 = distress zone
Medical Reference Ranges | Lab value | Population mean | Population σ | |Z| > 2 = clinically notable
Child Growth Charts (WHO) | Height / weight | Age-sex specific | Age-sex specific | Z < −2 = stunting; < −3 = severe
Machine Learning | Feature value | Feature mean | Feature σ | Standardise before PCA, SVM, k-means

Frequently Asked Questions

1. What is a Z-score and what does it measure?

A Z-score (standard score) measures how many standard deviations a particular value is above or below the mean of a distribution. Calculated as Z = (x - mu)/sigma, it transforms any normally distributed variable into the standard normal distribution (mu=0, sigma=1). A Z of +2 means the value is 2 standard deviations above the mean; Z = -1.5 means 1.5 SDs below. This standardisation allows comparison of values from different distributions with different means and units.

2. How do I convert a Z-score to a percentile?

A percentile tells you what percentage of the population falls below a given value. To convert Z to percentile, calculate the CDF: P = Phi(Z) = (1/2)[1 + erf(Z/sqrt(2))]. Common conversions: Z=0 is the 50th percentile; Z=1 is the 84th; Z=1.645 is the 95th; Z=1.96 is the 97.5th; Z=2.576 is the 99.5th; Z=-1 is the 16th. This is exactly what a Z-table looks up.

3. What is the difference between left-tail, right-tail, and two-tail probabilities?

Left-tail P(X <= x) = Phi(Z) is the area to the LEFT of z on the bell curve. Right-tail P(X >= x) = 1 - Phi(Z) is the area to the right. Two-tail P(|Z| >= z) = 2(1 - Phi(|z|)) is the total area in both tails. For hypothesis testing, two-tail tests use alpha/2 in each tail; one-tail tests use the full alpha in one tail. Z = 1.96 gives a 5% two-tail significance level.

4. What is the inverse Z-score (percentile to Z)?

The inverse Z (quantile function) finds the Z-value for a given cumulative probability. For example: the 90th percentile is Z = 1.282; the 97.5th percentile is Z = 1.960; the 99th percentile is Z = 2.326. This is used to find z* for confidence intervals and the critical z-value for a given alpha in hypothesis testing. Computed as Z = sqrt(2) * erf_inverse(2p - 1).
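Φ⁻¹ has no closed form, but since Φ is strictly increasing, it can be computed by bisection on the CDF; a minimal sketch (names are illustrative, and scipy.stats.norm.ppf would do the same job):

```python
import math

def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def inverse_z(p, lo=-10.0, hi=10.0, tol=1e-9):
    """Standard normal quantile by bisection on Phi, for 0 < p < 1."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(inverse_z(0.975), 3))  # 1.96
print(round(inverse_z(0.90), 3))   # 1.282
```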

5. What is a confidence interval and how does Z relate to it?

A confidence interval (CI) is a range of values that contains the true population mean with a specified probability. The formula is x-bar +/- z* * (sigma/sqrt(n)). The z* is the critical Z value: 1.645 for 90%, 1.960 for 95%, 2.576 for 99%. The margin of error ME = z* * SE where SE = sigma/sqrt(n). A wider CI gives more confidence but less precision. A 95% CI does not mean 95% probability that this specific interval contains mu; rather, 95% of intervals built this way contain the true mean.

6. What is a Z-test and when should I use it?

A one-sample Z-test checks whether a sample mean differs significantly from a known population mean, using Z = (x-bar - mu0) / (sigma/sqrt(n)). Use it when the population standard deviation sigma is known and n > 30 (or data are normal). A two-sample Z-test compares means from two independent groups. If sigma is unknown and n is small, use a t-test instead. The p-value is derived from the standard normal CDF.

7. What is a normal distribution and when does data follow it?

The normal (Gaussian) distribution is a symmetric bell-shaped distribution defined by its mean mu and standard deviation sigma. Many natural phenomena approximate normality: heights, test scores, measurement errors, blood pressure. The Central Limit Theorem guarantees that sample means approach normality for large n (n > 30), even when individual observations are skewed, which is why Z-score methods work broadly.

8. What Z-score is considered an outlier?

Common thresholds: |Z| > 2 for a moderate outlier, |Z| > 3 for an extreme outlier. Values with |Z| > 3 occur only 0.27% of the time in a normal distribution (about 3 in 1,000). In Six Sigma quality control, processes are designed so the specification limits sit ±6σ from the process mean, allowing only 3.4 defects per million opportunities. The appropriate threshold depends on context.

9. What is the difference between standard deviation and standard error?

Standard deviation (sigma or s) measures the spread of individual data values around the mean. Standard error (SE = sigma/sqrt(n)) measures how much the sample mean varies across different samples. SE is always smaller than sigma for n > 1 and shrinks as 1/sqrt(n) as sample size increases, which is why larger samples give more precise estimates of the population mean.

10. What is sample size and how is it calculated?

For a confidence interval with desired margin of error ME: n = (z* * sigma / ME) squared, rounded up. For a hypothesis test with desired power (1 - beta) and minimum detectable effect delta: n = ((z_alpha/2 + z_beta) * sigma / delta) squared. Larger n gives narrower CIs and higher power at increased cost.

11. Can Z-scores be used for non-normal distributions?

Z-scores can be calculated for any distribution, but the probability interpretations from the Z-table only apply precisely to normal distributions. For non-normal data, Chebyshev's inequality gives a distribution-free lower bound: at least (1 - 1/k^2) * 100% of data lies within +/-k sigma. For large samples (n > 30), the Central Limit Theorem ensures the sample mean is approximately normal, so Z-based confidence intervals remain valid.

12. What is the difference between Z-score and t-score?

Both Z and t-scores measure distance from the mean in standard deviation units. Z-scores are used when the population sigma is known or n > 30. T-scores are used when sigma is unknown and estimated by s; the t-distribution has heavier tails than the normal, especially for small n. As n increases, the t-distribution converges to the standard normal. Rule of thumb: use Z when sigma is known or n > 30; use t when sigma is unknown and n <= 30.

13. How is Z-score used in quality control and Six Sigma?

Process Z (sigma level) measures capability: a Six Sigma process has +/-6 sigma between the mean and specification limits, corresponding to 3.4 DPMO. Process capability indices Cp = (USL - LSL)/(6 sigma) and Cpk indicate how well the process fits within spec limits. Control charts use +/-3 sigma limits as standard control limits; points beyond these trigger investigation.

14. How do I standardise a dataset for machine learning?

Z-score normalisation: for each value x, compute Z = (x - mu) / sigma. The result has mean = 0 and std = 1. This is essential before PCA, k-means clustering, support vector machines, or any distance-based algorithm where features with different scales would otherwise dominate. Min-max normalisation is a different technique that maps values to [0, 1].
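A minimal standardisation sketch using the standard library (the standardise name is illustrative; in practice sklearn's StandardScaler does this per feature):

```python
import statistics

def standardise(values):
    """Z-score normalisation: subtract the mean, divide by the population SD."""
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    return [(v - mu) / sigma for v in values]

z = standardise([2, 4, 4, 4, 5, 5, 7, 9])
# This dataset has mu = 5 and sigma = 2, so z = [-1.5, -0.5, -0.5, -0.5, 0, 0, 1, 2]
```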

15. What is the Central Limit Theorem and why does it matter?

The CLT states that the sampling distribution of the mean approaches N(mu, sigma^2/n) as n increases, regardless of the population shape. For n >= 30, x-bar is approximately normal even if individual observations are skewed. This is why Z-based confidence intervals and hypothesis tests are valid for large samples from any distribution, not just normally distributed populations.
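The effect is easy to see in a quick simulation: draw sample means from a heavily skewed population and check that they cluster normally around μ with spread σ/√n (the setup below is an illustrative sketch with a fixed seed):

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Sample means from an exponential population (mu = 1, sigma = 1), which is strongly skewed.
n, trials = 50, 2000
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(trials)]

# The CLT predicts the sample mean is roughly N(1, 1/n):
# mean of means near 1, SD of means near 1/sqrt(50) ≈ 0.141.
print(round(statistics.fmean(means), 2))
print(round(statistics.stdev(means), 2))
```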

Explore More Engineering Tools

Water budget, sieve analysis, chi-square tests, simultaneous equations and many more free tools.
