Simultaneous Equation Solver (2–10 Variables)

Solve systems of linear equations by four methods: matrix inverse, Gaussian elimination, Cramer's rule, and LU decomposition. Get full step-by-step workings, matrix properties (determinant, rank, condition number, eigenvalues), solution verification, 2D/3D visualisation, sensitivity analysis, and LaTeX/CSV export.

2–10 Variables • 4 Solution Methods • Eigenvalues • LU Decomposition • Sensitivity Analysis • 2D / 3D Graph

Enter System of Equations

Enter coefficients for each equation. The constant (right-hand side) is highlighted in blue. Custom variable names are editable.

Quick Examples
Variable names:

Matrix Properties & Eigenvalues

Solve a system first. Properties update automatically after each solve.

Solve a system to see matrix properties here.

Step-by-Step Solution

Solve a system to see step-by-step workings here.

Graph (2-variable: Lines | 3-variable: Planes)

For 2-variable systems each equation is a line; the red dot is the solution. For 3-variable systems, each equation is a plane (projected as lines through the solution point).

Sensitivity Analysis

After solving, perturb any coefficient or constant using the sliders and see how the solution changes. This reveals which equations and coefficients most affect the answer.

Solve a system to enable sensitivity analysis.

Solution History (Last 5)

Click any previous solution to reload it into the solver.

No solutions yet. Solve a system to see history.

Theory & Solution Methods

A system of linear simultaneous equations consists of n equations and n unknowns where all variables appear only with exponent 1. In matrix form:

A · x = b
A = n×n coefficient matrix | x = unknown vector | b = constant vector
Method 1 — Matrix Inverse

If det(A) ≠ 0, A is invertible and the unique solution is x = A⁻¹b. The inverse is computed as:

A⁻¹ = adj(A) / det(A)
adj(A) = transpose of the cofactor matrix | C_ij = (−1)^(i+j) · M_ij (minor of element i,j)
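The adjugate formula above can be sketched in pure Python for small systems. This is a minimal illustration, not the tool's implementation; the matrix and right-hand side are made-up examples, and recursive cofactor expansion is only practical for small n.

```python
def det(M):
    """Determinant by cofactor expansion along the first row (small n only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def inverse(A):
    """A⁻¹ = adj(A) / det(A), where adj(A) is the transposed cofactor matrix."""
    n = len(A)
    d = det(A)
    if abs(d) < 1e-12:
        raise ValueError("matrix is singular: no unique solution")
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j+1:] for k, r in enumerate(A) if k != i])
            for j in range(n)] for i in range(n)]
    # transpose the cofactor matrix, then divide every entry by det(A)
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

def solve_inverse(A, b):
    """Solve A·x = b via x = A⁻¹·b."""
    Ainv = inverse(A)
    return [sum(Ainv[i][j] * b[j] for j in range(len(b))) for i in range(len(b))]

# Example system: 4x + 7y = 1, 2x + 6y = 2
print(solve_inverse([[4, 7], [2, 6]], [1, 2]))  # → approximately [-0.8, 0.6]
```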
Method 2 — Gaussian Elimination with Partial Pivoting

The augmented matrix [A|b] is reduced to upper triangular form using three elementary row operations: row swap, row scaling, and row addition. Partial pivoting selects the row with the largest absolute pivot element to minimise rounding error. Back-substitution then recovers x from the triangular system:

x_n = b'_n / u_nn

x_i = (b'_i − ∑_{j>i} u_ij x_j) / u_ii    (i = n−1, n−2, ..., 1)

where u_ij are elements of the upper triangular matrix U and b' is the right-hand side after elimination
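The procedure above can be sketched as a short pure-Python routine. This is an illustrative sketch, not the tool's own code; the 3×3 example system is invented for demonstration.

```python
def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting, then back-substitution."""
    n = len(A)
    # build the augmented matrix [A|b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        # partial pivoting: bring the largest |entry| in column k up to row k
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        if abs(M[p][k]) < 1e-12:
            raise ValueError("singular (or nearly singular) matrix")
        M[k], M[p] = M[p], M[k]
        # eliminate entries below the pivot
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # back-substitution: x_i = (b'_i − Σ_{j>i} u_ij x_j) / u_ii
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example: x + y + z = 6, 2y + 5z = −4, 2x + 5y − z = 27  →  x=5, y=3, z=−2
print(gauss_solve([[1, 1, 1], [0, 2, 5], [2, 5, -1]], [6, -4, 27]))
```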
Method 3 — Cramer's Rule
x_i = det(A_i) / det(A)
A_i = matrix A with column i replaced by vector b | valid for any n, but practical only for n = 2 or 3

Elegant for n = 2 or 3. Requires n+1 determinant evaluations; impractical for large n.
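Cramer's rule translates almost directly into code. A minimal sketch with a made-up 2×2 example (the recursive determinant keeps it self-contained, and also shows why the method is expensive for larger n):

```python
def det(M):
    """Determinant by cofactor expansion (fine for the small n Cramer targets)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def cramer(A, b):
    """x_i = det(A_i) / det(A), where A_i is A with column i replaced by b."""
    n, d = len(A), det(A)
    if abs(d) < 1e-12:
        raise ValueError("det(A) = 0: no unique solution")
    return [det([[b[r] if c == i else A[r][c] for c in range(n)]
                 for r in range(n)]) / d
            for i in range(n)]

# Example: x + 2y = 5, 3x − y = 1  →  x = 1, y = 2
print(cramer([[1, 2], [3, -1]], [5, 1]))
```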

Method 4 — LU Decomposition (Doolittle)

Factorises A = L·U where L is unit lower triangular (diagonal = 1) and U is upper triangular:

A = L · U

Forward substitution (L · y = b):
y_i = b_i − ∑_{j<i} l_ij y_j

Back substitution (U · x = y):
x_i = (y_i − ∑_{j>i} u_ij x_j) / u_ii

Advantage: reuse L and U for multiple right-hand sides b without re-factorising
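The factor-once, solve-many workflow can be sketched as follows. This is an illustrative pure-Python sketch without the pivoting a production solver would add, and the example matrix and right-hand sides are invented:

```python
def lu_doolittle(A):
    """Doolittle factorisation A = L·U with unit-diagonal L (no pivoting here)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):               # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):           # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve A·x = b given A = L·U, in O(n²) per right-hand side."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                      # forward: y_i = b_i − Σ_{j<i} l_ij y_j
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # back: x_i = (y_i − Σ_{j>i} u_ij x_j)/u_ii
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[2, 1], [1, 3]]
L, U = lu_doolittle(A)           # factor once (O(n³))...
print(lu_solve(L, U, [5, 10]))   # ...then reuse L and U for each new b (O(n²))
print(lu_solve(L, U, [3, 4]))
```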
Determinant and Solution Type
| det(A) | Rank condition | Solution type | Geometric (2D) |
|---|---|---|---|
| ≠ 0 | rank(A) = n | Unique solution | Lines cross at one point |
| = 0 | rank(A) = rank([A|b]) < n | Infinite solutions | Lines are coincident |
| = 0 | rank(A) < rank([A|b]) | No solution | Lines are parallel |
Eigenvalues and Eigenvectors

The eigenvalues λ of A satisfy det(A − λI) = 0 (the characteristic equation). Each eigenvalue has a corresponding eigenvector v where Av = λv. Eigenvalues reveal:

| Eigenvalue property | What it means |
|---|---|
| All λ > 0 | A is positive definite (common in structural stiffness matrices) |
| Any λ = 0 | A is singular (det = 0), no unique solution |
| Mixed-sign λ | A is indefinite (saddle point in optimisation) |
| λ₁ = max, λₙ = min | Condition number κ = \|λ₁ / λₙ\| (spectral definition) |
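For a 2×2 matrix the characteristic equation det(A − λI) = 0 reduces to a quadratic, λ² − tr(A)·λ + det(A) = 0, so the eigenvalues can be computed directly. A minimal sketch with an invented symmetric example (complex eigenvalues are not handled):

```python
import math

def eig2x2(A):
    """Eigenvalues of a 2×2 matrix from λ² − tr(A)·λ + det(A) = 0."""
    (a, b), (c, d) = A
    tr, dt = a + d, a * d - b * c
    disc = tr * tr - 4 * dt
    if disc < 0:
        raise ValueError("complex eigenvalues (not handled in this sketch)")
    r = math.sqrt(disc)
    return (tr + r) / 2, (tr - r) / 2

lams = eig2x2([[2, 1], [1, 2]])  # symmetric matrix → real eigenvalues 3 and 1
print(lams)                       # spectral condition number κ = |3 / 1| = 3
```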
Condition Number and Numerical Stability
κ(A) = ∥A∥ · ∥A⁻¹∥   (matrix norm definition)
Well-conditioned: κ ≈ 1 | Ill-conditioned: κ ≫ 1 | Singular: κ = ∞

For ill-conditioned systems (κ > 1000), floating-point errors may corrupt the last few significant digits. The sensitivity analysis in this tool reveals which coefficients cause the most instability.

Sensitivity (Perturbation Analysis)
Δx ≈ A⁻¹ · Δb    (perturbation in b)
Δx ≈ −A⁻¹ · (ΔA) · x    (perturbation in A)
Relative error bound: ∥Δx∥ / ∥x∥ ≤ κ(A) · ∥Δb∥ / ∥b∥

This formula explains why a high condition number amplifies errors: a 1% change in b can cause up to κ% change in x.
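This amplification is easy to demonstrate numerically. In the sketch below (an invented, nearly singular 2×2 system), perturbing one constant by 0.005% moves the solution by roughly 100%:

```python
def solve2(A, b):
    """Direct 2×2 solve via Cramer's rule (sufficient for this demo)."""
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / d,
            (A[0][0] * b[1] - b[0] * A[1][0]) / d]

A = [[1.0, 1.0], [1.0, 1.0001]]   # nearly singular → very large κ(A)
x0 = solve2(A, [2.0, 2.0001])     # exact solution [1, 1]
x1 = solve2(A, [2.0, 2.0002])     # perturb b₂ by a tiny 0.005% ...
print(x0, x1)                     # ... and the solution jumps to about [0, 2]
```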

Engineering Applications
| Application | System represents | Variables | Typical size |
|---|---|---|---|
| Electrical Circuits | KVL / KCL mesh equations | Branch currents, node voltages | 2–10 |
| Structural Analysis | Stiffness matrix K·u = F | Nodal displacements | 3–1000+ |
| Heat Transfer (FDM) | Finite difference temperature nodes | Node temperatures | 3–100 |
| Fluid / Pipe Network | Hardy-Cross method | Pipe flow rates, heads | 3–20 |
| Economics (Leontief) | Input-output production model | Industry output levels | 5–50 |
| Traffic Flow | Conservation at intersections | Road flow rates | 3–15 |
| Chemistry Mixtures | Mass/mole balances | Component concentrations | 2–8 |
Algorithm Complexity Comparison
| Method | Time complexity | Best for | Numerical stability |
|---|---|---|---|
| Matrix Inverse | O(n³) | n ≤ 4, multiple RHS | Good (with pivoting) |
| Gaussian Elim. | O(n³/3) | General purpose | Excellent (partial pivot) |
| Cramer's Rule | O((n+1)!) | n = 2 or 3 only | Poor for large n |
| LU Decomposition | O(n³/3) + O(n²) | Multiple RHS reuse | Excellent (with pivoting) |

Frequently Asked Questions

1. What is a system of simultaneous linear equations?

A system of simultaneous linear equations is a set of n equations, each containing n unknowns, where every term is either a constant or a constant multiplied by a variable (no powers, products, or trigonometric functions of variables). A solution is a set of values for all unknowns that satisfies all equations simultaneously. The system has a unique solution, no solution, or infinitely many solutions depending on the determinant and rank of the coefficient matrix.

2. How many variables can this tool handle?

This tool handles systems from 2 to 10 variables (equations). For 2-variable systems it plots both lines and their intersection on a 2D graph. For 3-variable systems, a projected 3D plane view is also available. All four methods (matrix inverse, Gaussian elimination, Cramer's rule, LU decomposition) work for 2 to 10 variables.

3. What is LU decomposition and when is it better than Gaussian elimination?

LU decomposition factorises the coefficient matrix A into a product L*U where L is unit lower triangular and U is upper triangular. The factorisation costs O(n^3/3) operations, same as Gaussian elimination, but the key advantage is that once L and U are computed, solving for different right-hand side vectors b requires only O(n^2) additional work each time. This makes LU decomposition significantly more efficient when the same matrix A appears with many different b vectors, such as in structural analysis with multiple load cases.

4. What is Cramer's Rule and when should I use it?

Cramer's Rule solves x_i = det(A_i)/det(A) where A_i replaces column i of A with vector b. It is mathematically elegant and easy to apply by hand for n = 2 or 3. For n >= 4, it requires n+1 full determinant evaluations, which grows factorially in cost with naive cofactor expansion, making it far more expensive than Gaussian elimination. Use Cramer's rule only for 2x2 or 3x3 systems, or for deriving analytical formulas for symbolic solutions.

5. Why does the solver say the system has no unique solution?

When det(A) = 0, the coefficient matrix is singular. This means either the equations are inconsistent (contradictory, no solution) or dependent (one equation is a linear combination of others, giving infinitely many solutions). Check for duplicate or proportional equations, and verify that all equations are linearly independent.

6. What do eigenvalues tell us about the system?

Eigenvalues lambda of A satisfy det(A - lambda*I) = 0. If any eigenvalue equals zero, the matrix is singular (det = 0) and no unique solution exists. The ratio of the largest to smallest absolute eigenvalue is the spectral condition number, indicating how sensitive the solution is to numerical errors. All positive eigenvalues indicate a positive definite matrix (common in structural stiffness problems).

7. What is the condition number and why does it matter?

The condition number kappa(A) measures how much the solution x can change relative to small changes in the input (matrix A or vector b). Specifically, the relative error in x is bounded by kappa times the relative error in b. A well-conditioned system (kappa near 1) is numerically reliable. An ill-conditioned system (kappa >> 1000) may produce solutions where only a few digits are meaningful.

8. How does the sensitivity analysis work?

The sensitivity section lets you perturb any single coefficient or constant by a percentage using a slider. The tool re-solves the perturbed system and shows how each solution variable changes. This directly demonstrates the perturbation bound: the fractional change in x is bounded by kappa(A) times the fractional change in the perturbed element. Use this to identify which inputs most affect the solution.

9. Can I use custom variable names?

Yes. The variable name editor above the equation grid lets you rename variables from the defaults (x, y, z...) to anything you need: I1, I2, I3 for circuit currents; u1, u2 for displacements; T1, T2 for temperatures. Names appear throughout the solution display and in the LaTeX and CSV exports.

10. What does the LaTeX export produce?

The LaTeX copy button produces a properly formatted aligned equation environment and solution in LaTeX syntax, ready to paste directly into a LaTeX document, Overleaf, or a Markdown renderer that supports LaTeX (such as Jupyter notebooks or some wiki editors).

11. Can this tool handle overdetermined systems?

No. This tool requires exactly n equations for n unknowns (square system). Overdetermined systems (more equations than unknowns) require least-squares methods via the pseudo-inverse. Underdetermined systems (fewer equations) have infinitely many solutions and require additional constraints or minimum-norm methods.

12. What real-world problems use 3x3 or larger systems?

Examples include: Kirchhoff's circuit laws for mesh or nodal analysis (3-6 equations for complex circuits), truss analysis by the method of joints (2 equations per free joint), finite difference heat transfer (as many equations as temperature nodes), traffic flow conservation (one equation per intersection), pipe network Hardy-Cross method, and the Leontief input-output economic model (one equation per industry sector).

13. Why might two methods give slightly different numerical answers?

Different algorithms accumulate floating-point rounding errors in different orders. Matrix inverse and Gaussian elimination may produce results differing by amounts around 10^-12 for well-conditioned systems. These tiny differences are not mathematical errors but reflect the 64-bit floating-point representation of real numbers. The verification step shows the residual for each equation to confirm correctness.

14. What is partial pivoting and why does Gaussian elimination use it?

Without pivoting, Gaussian elimination can divide by very small pivot elements, greatly amplifying rounding errors. Partial pivoting swaps rows to always put the row with the largest absolute value in the pivot column first before elimination. This keeps all multipliers at most 1 in absolute value, which bounds error growth and makes the algorithm numerically stable for all well-conditioned systems.

15. How do I enter fractional or decimal coefficients?

Type the decimal value directly into any coefficient box (e.g., 0.5 for 1/2, or 0.333 for 1/3). The inputs accept any numeric value including negatives and decimals. For very small or large coefficients, scientific notation input is also accepted in most browsers (e.g., 1e-3 for 0.001).
