Enter System of Equations
Enter coefficients for each equation. The constant (right-hand side) is highlighted in blue. Custom variable names are editable.
Matrix Properties & Eigenvalues
Solve a system first. Properties update automatically after each solve.
Step-by-Step Solution
Graph (2-variable: Lines | 3-variable: Planes)
For 2-variable systems each equation is a line; the red dot is the solution. For 3-variable systems, each equation is a plane (projected as lines through the solution point).
Sensitivity Analysis
After solving, perturb any coefficient or constant using the sliders and see how the solution changes. This reveals which equations and coefficients most affect the answer.
Solution History (Last 5)
Click any previous solution to reload it into the solver.
Theory & Solution Methods
A system of linear simultaneous equations consists of n equations and n unknowns where all variables appear only with exponent 1. In matrix form:
A = n×n coefficient matrix | x = unknown vector | b = constant vector
If det(A) ≠ 0, A is invertible and the unique solution is x = A⁻¹b. The inverse is computed as:
adj(A) = transpose of the cofactor matrix | C_ij = (−1)^(i+j) × M_ij, where M_ij is the minor of element (i, j)
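The adjugate formula can be sketched in a few lines of NumPy (an illustration of the formula, not this tool's internal code). The function name is ours; only the maths comes from the text above:

```python
import numpy as np

def inverse_via_adjugate(A):
    """Invert A via A^-1 = adj(A) / det(A), where adj(A) is the
    transpose of the cofactor matrix C_ij = (-1)^(i+j) * M_ij.
    Practical only for small n; shown to illustrate the formula."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det = np.linalg.det(A)
    if abs(det) < 1e-12:
        raise ValueError("matrix is singular (det ~ 0)")
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # M_ij: determinant of A with row i and column j removed
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / det          # adj(A) = C^T, then divide by det(A)

x = inverse_via_adjugate([[2.0, 1.0], [1.0, 3.0]]) @ np.array([5.0, 10.0])
```

For anything beyond a teaching example, `np.linalg.solve` is the practical choice.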
The augmented matrix [A|b] is reduced to upper triangular form using three elementary row operations: row swap, row scaling, and row addition. Partial pivoting selects the row with the largest absolute pivot element to minimise rounding error. Back-substitution then recovers x from the triangular system:
x_i = (b'_i − ∑_{j>i} u_ij x_j) / u_ii   (i = n, n−1, ..., 1)
where uij are elements of upper triangular matrix U
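The elimination and back-substitution steps described above can be sketched as follows (a minimal teaching version, not this tool's production code):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting:
    reduce the augmented matrix [A|b] to upper triangular form,
    then recover x by back-substitution."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    M = np.hstack([A, b.reshape(-1, 1)])      # augmented matrix [A|b]
    for k in range(n - 1):
        p = k + np.argmax(np.abs(M[k:, k]))   # partial pivot: largest |entry|
        M[[k, p]] = M[[p, k]]                 # row swap
        for i in range(k + 1, n):
            M[i, k:] -= (M[i, k] / M[k, k]) * M[k, k:]   # row addition
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back-substitution
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

x = gauss_solve([[2, 1], [1, 3]], [5, 10])
```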
A_i = matrix A with column i replaced by vector b
Exact for any n and elegant for n = 2 or 3, but it requires n+1 determinant evaluations, so it is impractical for large n.
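A direct translation of x_i = det(A_i)/det(A) into NumPy (illustrative only; the function name is ours):

```python
import numpy as np

def cramer_solve(A, b):
    """Cramer's rule: x_i = det(A_i) / det(A), where A_i is A
    with column i replaced by b. Needs n+1 determinants, so use
    only for small systems."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if abs(det_A) < 1e-12:
        raise ValueError("det(A) = 0: no unique solution")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                       # replace column i with b
        x[i] = np.linalg.det(Ai) / det_A
    return x

x = cramer_solve([[2, 1], [1, 3]], [5, 10])
```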
Factorises A = L·U where L is unit lower triangular (diagonal = 1) and U is upper triangular:
Forward substitution (L · y = b):
y_i = b_i − ∑_{j<i} l_ij y_j
Back substitution (U · x = y):
x_i = (y_i − ∑_{j>i} u_ij x_j) / u_ii
Advantage: reuse L and U for multiple right-hand sides b without re-factorising
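A minimal Doolittle factorisation sketch showing the reuse advantage (no pivoting, so it assumes nonzero pivots; a real implementation would pivot):

```python
import numpy as np

def lu_solve_many(A, bs):
    """Factor A = L*U once (L unit lower triangular, U upper
    triangular), then solve A x = b for each right-hand side b
    by forward + back substitution at only O(n^2) each."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), A.copy()
    for k in range(n - 1):                    # Doolittle elimination
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    xs = []
    for b in bs:                              # reuse L, U per RHS
        y = np.zeros(n)
        for i in range(n):                    # forward: L y = b
            y[i] = b[i] - L[i, :i] @ y[:i]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):        # back: U x = y
            x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        xs.append(x)
    return xs

sols = lu_solve_many([[2, 1], [1, 3]],
                     [np.array([5.0, 10.0]), np.array([3.0, 4.0])])
```

This is exactly the multiple-load-case pattern: one factorisation, two cheap solves.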
| det(A) | Rank condition | Solution type | Geometric (2D) |
|---|---|---|---|
| ≠ 0 | rank(A) = n | Unique solution | Lines cross at one point |
| = 0 | rank(A) = rank([A|b]) < n | Infinite solutions | Lines are coincident |
| = 0 | rank(A) < rank([A|b]) | No solution | Lines are parallel |
The eigenvalues λ of A satisfy det(A − λI) = 0 (the characteristic equation). Each eigenvalue has a corresponding eigenvector v where Av = λv. Eigenvalues reveal:
| Eigenvalue property | What it means |
|---|---|
| All λ > 0 | A is positive definite (common in structural stiffness matrices) |
| Any λ = 0 | A is singular (det = 0), no unique solution |
| Mixed sign λ | A is indefinite (saddle point in optimisation) |
| λ1 = max \|λ\|, λn = min \|λ\| | Condition number κ = \|λ1 / λn\| (spectral definition) |
Well-conditioned: κ ≈ 1 | Ill-conditioned: κ ≫ 1 | Singular: κ = ∞
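For a symmetric matrix (where the spectral definition of κ applies directly), these quantities are a few lines of NumPy. The example matrix is ours:

```python
import numpy as np

# Symmetric example matrix: eigenvalues are real, so the spectral
# definition of the condition number applies directly.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam = np.linalg.eigvalsh(A)                  # eigenvalues, ascending order
positive_definite = bool(np.all(lam > 0))    # all eigenvalues > 0
kappa = abs(lam[-1]) / abs(lam[0])           # kappa = |lambda_max| / |lambda_min|
```

For non-symmetric matrices, `np.linalg.cond` (based on singular values) is the safer general-purpose measure.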
For ill-conditioned systems (κ > 1000), floating-point rounding errors are amplified: roughly log₁₀ κ significant digits of accuracy are lost. The sensitivity analysis in this tool reveals which coefficients cause the most instability.
Δx ≈ −A⁻¹ · (ΔA) · x (perturbation in A)
Relative error bound: ∥Δx∥ / ∥x∥ ≤ κ(A) · ∥Δb∥ / ∥b∥
This formula explains why a high condition number amplifies errors: a 1% change in b can cause up to κ% change in x.
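The amplification is easy to demonstrate with a deliberately near-singular matrix (our own example, not data from the tool):

```python
import numpy as np

# A nearly singular 2x2 system: the rows are almost proportional,
# so kappa is large and tiny changes in b swing x wildly.
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)                    # solution is [1, 1]
kappa = np.linalg.cond(A)                    # 2-norm condition number, ~4e4

db = np.array([0.0, 0.0001])                 # tiny perturbation of b
dx = np.linalg.solve(A, b + db) - x          # resulting change in x
rel_in = np.linalg.norm(db) / np.linalg.norm(b)
rel_out = np.linalg.norm(dx) / np.linalg.norm(x)
# The bound ||dx||/||x|| <= kappa * ||db||/||b|| holds, and here the
# relative error in x is vastly larger than the relative error in b.
```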
| Application | System represents | Variables | Typical size |
|---|---|---|---|
| Electrical Circuits | KVL / KCL mesh equations | Branch currents, node voltages | 2–10 |
| Structural Analysis | Stiffness matrix K·u = F | Nodal displacements | 3–1000+ |
| Heat Transfer (FDM) | Finite difference temperature nodes | Node temperatures | 3–100 |
| Fluid / Pipe Network | Hardy-Cross method | Pipe flow rates, heads | 3–20 |
| Economics (Leontief) | Input-output production model | Industry output levels | 5–50 |
| Traffic Flow | Conservation at intersections | Road flow rates | 3–15 |
| Chemistry Mixtures | Mass/mole balances | Component concentrations | 2–8 |
| Method | Time Complexity | Best for | Numerical stability |
|---|---|---|---|
| Matrix Inverse | O(n³) | n ≤ 4, multiple RHS | Good (with pivoting) |
| Gaussian Elim. | O(n³/3) | General purpose | Excellent (partial pivot) |
| Cramer's Rule | O((n+1)!) | n = 2 or 3 only | Poor for large n |
| LU Decomposition | O(n³/3) + O(n²) | Multiple RHS reuse | Excellent (with pivoting) |
Frequently Asked Questions
1. What is a system of simultaneous linear equations?
A system of simultaneous linear equations is a set of n equations, each containing n unknowns, where every term is either a constant or a constant multiplied by a variable (no powers, products, or trigonometric functions of variables). A solution is a set of values for all unknowns that satisfies all equations simultaneously. The system has a unique solution, no solution, or infinitely many solutions depending on the determinant and rank of the coefficient matrix.
2. How many variables can this tool handle?
This tool handles systems from 2 to 10 variables (equations). For 2-variable systems it plots both lines and their intersection on a 2D graph. For 3-variable systems, a projected 3D plane view is also available. All four methods (matrix inverse, Gaussian elimination, Cramer's rule, LU decomposition) work for 2 to 10 variables.
3. What is LU decomposition and when is it better than Gaussian elimination?
LU decomposition factorises the coefficient matrix A into a product L*U where L is unit lower triangular and U is upper triangular. The factorisation costs O(n^3/3) operations, same as Gaussian elimination, but the key advantage is that once L and U are computed, solving for different right-hand side vectors b requires only O(n^2) additional work each time. This makes LU decomposition significantly more efficient when the same matrix A appears with many different b vectors, such as in structural analysis with multiple load cases.
4. What is Cramer's Rule and when should I use it?
Cramer's Rule solves x_i = det(A_i)/det(A), where A_i is A with column i replaced by vector b. It is mathematically elegant and easy to apply by hand for n = 2 or 3. For n >= 4 it requires n+1 full determinant evaluations, which grows far faster than the cost of Gaussian elimination (factorially, if determinants are expanded by cofactors). Use Cramer's rule only for 2x2 or 3x3 systems, or for deriving analytical formulas for symbolic solutions.
5. Why does the solver say the system has no unique solution?
When det(A) = 0, the coefficient matrix is singular. This means either the equations are inconsistent (contradictory, no solution) or dependent (one equation is a linear combination of others, giving infinitely many solutions). Check for duplicate or proportional equations, and verify that all equations are linearly independent.
6. What do eigenvalues tell us about the system?
Eigenvalues lambda of A satisfy det(A - lambda*I) = 0. If any eigenvalue equals zero, the matrix is singular (det = 0) and no unique solution exists. The ratio of the largest to smallest absolute eigenvalue is the spectral condition number, indicating how sensitive the solution is to numerical errors. All positive eigenvalues indicate a positive definite matrix (common in structural stiffness problems).
7. What is the condition number and why does it matter?
The condition number kappa(A) measures how much the solution x can change relative to small changes in the input (matrix A or vector b). Specifically, the relative error in x is bounded by kappa times the relative error in b. A well-conditioned system (kappa near 1) is numerically reliable. An ill-conditioned system (kappa >> 1000) may produce solutions where only a few digits are meaningful.
8. How does the sensitivity analysis work?
The sensitivity section lets you perturb any single coefficient or constant by a percentage using a slider. The tool re-solves the perturbed system and shows how each solution variable changes. This directly demonstrates the perturbation bound: the fractional change in x is bounded by kappa(A) times the fractional change in the perturbed element. Use this to identify which inputs most affect the solution.
9. Can I use custom variable names?
Yes. The variable name editor above the equation grid lets you rename variables from the defaults (x, y, z...) to anything you need: I1, I2, I3 for circuit currents; u1, u2 for displacements; T1, T2 for temperatures. Names appear throughout the solution display and in the LaTeX and CSV exports.
10. What does the LaTeX export produce?
The LaTeX copy button produces a properly formatted aligned equation environment and solution in LaTeX syntax, ready to paste directly into a LaTeX document, Overleaf, or a Markdown renderer that supports LaTeX (such as Jupyter notebooks or some wiki editors).
11. Can this tool handle overdetermined systems?
No. This tool requires exactly n equations for n unknowns (square system). Overdetermined systems (more equations than unknowns) require least-squares methods via the pseudo-inverse. Underdetermined systems (fewer equations) have infinitely many solutions and require additional constraints or minimum-norm methods.
12. What real-world problems use 3x3 or larger systems?
Examples include: Kirchhoff's circuit laws for mesh or nodal analysis (3-6 equations for complex circuits), truss analysis by the method of joints (2 equations per free joint), finite difference heat transfer (as many equations as temperature nodes), traffic flow conservation (one equation per intersection), pipe network Hardy-Cross method, and the Leontief input-output economic model (one equation per industry sector).
13. Why might two methods give slightly different numerical answers?
Different algorithms accumulate floating-point rounding errors in different orders. Matrix inverse and Gaussian elimination may produce results differing by amounts around 10^-12 for well-conditioned systems. These tiny differences are not mathematical errors but reflect the 64-bit floating-point representation of real numbers. The verification step shows the residual for each equation to confirm correctness.
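The residual check described above amounts to substituting the computed x back into every equation (a sketch with our own example numbers):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = np.linalg.solve(A, b)

residual = A @ x - b          # one entry per equation; ~0 if x is correct
max_residual = np.max(np.abs(residual))
# For a well-conditioned system, max_residual sits near machine
# precision (~1e-15), confirming the solution despite tiny
# method-to-method differences.
```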
14. What is partial pivoting and why does Gaussian elimination use it?
Without pivoting, Gaussian elimination can divide by very small pivot elements, greatly amplifying rounding errors. Partial pivoting swaps rows so that, at each elimination step, the pivot is the entry with the largest absolute value in the current column. This keeps every multiplier at most 1 in absolute value, which bounds error growth and makes the algorithm numerically stable in practice.
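The classic textbook demonstration of why pivoting matters, worked by hand in floating point (the 2x2 system is our own example):

```python
# System: eps*x + y = 1,  x + y = 2.  True solution is x = y = 1
# (to within eps). The natural first pivot, eps, is tiny.
eps = 1e-20

# Without pivoting: the multiplier 1/eps is enormous, and the
# subtraction 2 - 1/eps wipes out all the information in row 2.
m = 1.0 / eps
y_bad = (2.0 - m * 1.0) / (1.0 - m * 1.0)   # rounds to exactly 1.0
x_bad = (1.0 - y_bad) / eps                  # catastrophically wrong: 0.0

# With partial pivoting: swap rows so the pivot is 1; the multiplier
# is eps, and both unknowns come out correctly.
y_good = (1.0 - eps * 2.0) / (1.0 - eps * 1.0)
x_good = 2.0 - y_good                        # ~1.0 as expected
```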
15. How do I enter fractional or decimal coefficients?
Type the decimal value directly into any coefficient box (e.g., 0.5 for 1/2, or 0.333 for 1/3). The inputs accept any numeric value including negatives and decimals. For very small or large coefficients, scientific notation input is also accepted in most browsers (e.g., 1e-3 for 0.001).
Explore More Engineering Tools
From statistics and fluid mechanics to structural analysis and data science — all the calculators engineers and students actually need, free forever.