Section: Science

Numerical Simulation and Error Analysis of the Newton–Raphson Method for Solving Nonlinear Equations

Vol. 11 No. 1 (2026): June

Abbas Mohsin Kazar (1)

(1) Teacher at Al-Shahimiya Middle School for Boys / Al-Suwaira Education Department, Wasit Governorate, Iraq

Abstract:

General Background: Nonlinear equations frequently arise in engineering and applied sciences and require reliable iterative numerical techniques for accurate root approximation. Specific Background: Among classical root-finding approaches, the Newton–Raphson method utilizes first-order Taylor series expansion and derivative information to generate successive approximations with theoretically quadratic convergence. Knowledge Gap: Despite its theoretical advantages, detailed numerical simulation integrating convergence verification, residual decay analysis, sensitivity to initial guesses, and comparison with alternative classical methods across representative nonlinear equations remains limited in a unified framework. Aims: This study analytically derives the Newton–Raphson iteration from Taylor expansion, verifies its quadratic convergence, implements a complete Python-based simulator with iteration logging, and evaluates performance using four representative nonlinear equations, including polynomial, transcendental, and exponential cases. Results: Numerical simulations confirm machine-precision solutions within four to five iterations, empirical convergence order p ≈ 2.00 ± 0.05, rapid residual decay, and substantially fewer iterations compared with bisection and secant methods, while also demonstrating sensitivity patterns related to initial guesses near inflection or flat regions. Novelty: The study integrates rigorous derivation, numerical simulation, graphical convergence interpretation, residual evaluation, and comparative benchmarking within a single structured analysis of four canonical test equations. Implications: These findings provide validated guidance for selecting initial approximations, interpreting convergence behavior, and applying the Newton–Raphson method efficiently in numerical analysis and engineering computation contexts.


Highlights:


• Machine-Precision Solutions Obtained Within Five Iterations Across Representative Nonlinear Test Cases
• Iteration Counts Substantially Lower Than Classical Interval-Halving and Derivative-Free Alternatives
• Convergence Behavior Varies With Starting Approximation Near Inflection or Flat Function Regions


Keywords: Newton–Raphson Method, Nonlinear Equations, Quadratic Convergence, Numerical Simulation, Error Analysis.


1. Introduction

Nonlinear algebraic equations arise throughout engineering and the applied sciences: structural equilibrium systems, thermodynamic phase calculations, control system design, and circuit analysis, among others, all demand efficient root-finding algorithms. Most nonlinearities encountered in practice are transcendental or of high polynomial degree, and since closed-form solutions exist only for polynomials up to degree four, iterative numerical techniques are required [1].

Among the classical iterative root-finding methods (bisection, false position, fixed-point iteration, the secant method, and higher-order variants), the Newton–Raphson (N–R) method is the simplest and the fastest to converge. Attributed independently to Isaac Newton (1669) and Joseph Raphson (1690), it uses derivative information to construct the tangent line to the curve f(x) at each iteration and takes the x-intercept of that tangent as the next approximation [2, 3].

Writing x* for the root, i.e., f(x*) = 0, the N–R iteration is obtained by expanding f(x) in a Taylor series about the current estimate xₙ and keeping only the first two terms:

f(x) ≈ f(xₙ) + f′(xₙ)(x − xₙ)

Setting this linear approximation to zero and solving for x gives the basic iteration formula:

xₙ₊₁ = xₙ − f(xₙ) / f′(xₙ)   (1)
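A single update of formula (1) can be checked by hand; the following minimal sketch uses Case 1 of the paper's test suite, f(x) = x³ − 2x − 5 with x₀ = 2.0:

```python
# One Newton-Raphson update for f(x) = x^3 - 2x - 5 from x0 = 2.0
# (Case 1 of the paper's test suite).
def f(x):
    return x**3 - 2*x - 5

def df(x):
    return 3*x**2 - 2

x0 = 2.0
x1 = x0 - f(x0) / df(x0)   # x-intercept of the tangent line at x0
print(x1)  # -> 2.1, since f(2) = -1 and f'(2) = 10
```

Each further application of the same formula replaces the current estimate with the tangent's x-intercept, which is the geometric picture behind Figure 1.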

This study has the following major aims: (i) to derive the method rigorously and show that it converges quadratically, (ii) to implement the algorithm in a complete Python simulator with automatic iteration logging, (iii) to test it on four canonical equations, (iv) to compare it with the bisection and secant methods, and (v) to illustrate all results graphically [4, 5].

2. Mathematical Foundations

2.1 Derivation from Taylor Expansion

Let f : ℝ → ℝ be a sufficiently smooth function and let x* be a simple root, i.e., f(x*) = 0 and f′(x*) ≠ 0. Expanding f about an approximation xₙ yields:

f(x) = f(xₙ) + f′(xₙ)(x − xₙ) + (f″(ξ)/2)(x − xₙ)²   (2)

where ξ lies between x and xₙ. Setting x = x* and neglecting the quadratic remainder yields the linear approximation whose zero defines xₙ₊₁ according to Equation (1) [6, 7].

2.2 Convergence Analysis

Define the error at step n as eₙ = xₙ − x*. Subtracting x* from both sides of Equation (1):

eₙ₊₁ = eₙ − f(xₙ) / f′(xₙ)   (3)

Expanding f(xₙ) and f′(xₙ) about x*:

f(xₙ) = f′(x*) eₙ + (f″(x*)/2) eₙ² + O(eₙ³)   (4)

f′(xₙ) = f′(x*) + f″(x*) eₙ + O(eₙ²)   (5)

Substituting (4) and (5) into (3) and simplifying gives the fundamental error-propagation relation:

eₙ₊₁ = (f″(x*) / (2 f′(x*))) eₙ² + O(eₙ³)   (6)

This demonstrates that the N–R method possesses quadratic (second-order) convergence: the error at step n+1 is proportional to the square of the error at step n. The asymptotic error constant is C = |f″(x*) / (2 f′(x*))| [8-10].
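Relation (6) can also be checked numerically: run a few iterations on Case 1 and estimate the order from successive errors. The sketch below uses the standard estimator p ≈ log(eₙ₊₁/eₙ) / log(eₙ/eₙ₋₁); this is a common choice, not necessarily the authors' exact procedure:

```python
import math

def f(x):
    return x**3 - 2*x - 5

def df(x):
    return 3*x**2 - 2

# Iterate Newton-Raphson from x0 = 2.0 and keep every iterate.
xs = [2.0]
for _ in range(6):
    x = xs[-1]
    xs.append(x - f(x) / df(x))

root = xs[-1]                       # converged value, used as x*
errors = [abs(x - root) for x in xs[:-1]]

# Empirical convergence order from three successive nonzero errors.
p = math.log(errors[3] / errors[2]) / math.log(errors[2] / errors[1])
print(p)  # close to 2, consistent with quadratic convergence
```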

2.3 Sufficient Conditions for Convergence (Kantorovich Theorem)

If f ∈ C²[a,b] and there exist positive constants M₁, M₂, M₃ such that |f(x₀)| ≤ M₁, |1/f′(x)| ≤ M₂, and |f″(x)| ≤ M₃ on the interval, convergence is guaranteed when the Kantorovich condition is satisfied [9]:

2 M₁ M₂² M₃ ≤ 1
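Such bounds can be estimated numerically. The sketch below does this for Case 1; note that the interval [1.8, 2.4] and the textbook form 2M₁M₂²M₃ ≤ 1 of the condition are assumptions made for illustration:

```python
import numpy as np

def f(x):
    return x**3 - 2*x - 5

def df(x):
    return 3*x**2 - 2

def d2f(x):
    return 6*x

# Bounds on an interval [a, b] containing both x0 = 2.0 and the root.
a, b = 1.8, 2.4
xs = np.linspace(a, b, 1001)
M1 = abs(f(2.0))                     # |f(x0)|
M2 = np.max(1.0 / np.abs(df(xs)))    # bound on |1/f'(x)|
M3 = np.max(np.abs(d2f(xs)))         # bound on |f''(x)|

h = 2 * M1 * M2**2 * M3
print(h, h <= 1)   # condition satisfied -> convergence guaranteed
```

The condition is sufficient, not necessary: on a wider interval the product may exceed 1 even though the iteration still converges in practice.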

2.4 Stopping Criteria

Three complementary stopping criteria are employed in this work:

1. Step-size test: |xₙ₊₁ − xₙ| < ε₁
2. Residual test: |f(xₙ₊₁)| < ε₂
3. Iteration cap: n ≤ N_max

All simulations use ε₁ = 10⁻¹⁰, which corresponds to approximately ten significant decimal digits of accuracy [11].

3. Methodology

3.1 Test Cases

Four nonlinear equations were chosen so as to cover a variety of mathematical behaviours experienced in engineering practice:

Table 1. Summary of test cases of the numerical experiments.

3.2 Implementation

The complete Python implementation accepts any differentiable function f and its derivative f′, an initial guess x₀, an absolute tolerance ε, and a maximum iteration count. At each step it records xₙ, f(xₙ), f′(xₙ), xₙ₊₁, and the absolute step size |Δx|. The simulation terminates when |Δx| < ε or the maximum iteration count is reached.

3.3 Comparison Methods

Two classical alternatives were implemented for benchmark comparison. The bisection method brackets the root in [a,b] and halves the interval iteratively, guaranteeing convergence with linear order p = 1. The secant method approximates f′(xₙ) via finite differences, achieving super-linear convergence p ≈ 1.618 without requiring an analytic derivative [12, 13].
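The two benchmark methods can be sketched as follows. This is a minimal sketch; the bracket [1, 3] and secant starting points are illustrative choices for Case 1, not necessarily those used in the paper:

```python
def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Interval halving: linear convergence, guaranteed when f(a)*f(b) < 0."""
    fa = f(a)
    for n in range(1, max_iter + 1):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or 0.5 * (b - a) < tol:
            return m, n
        if fa * fm < 0:
            b = m                    # root lies in the left half
        else:
            a, fa = m, fm            # root lies in the right half
    return 0.5 * (a + b), max_iter

def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Finite-difference Newton: superlinear (p ~ 1.618), derivative-free."""
    f0, f1 = f(x0), f(x1)
    for n in range(1, max_iter + 1):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2, n
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1, max_iter

f = lambda x: x**3 - 2*x - 5
print(bisection(f, 1, 3))   # root ~ 2.0945514815, many iterations
print(secant(f, 1, 3))      # same root, far fewer iterations
```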

4. Results and Discussion

4.1 Iteration Histories

Table 2 presents the complete iteration record for f(x) = x³ − 2x − 5. Starting from x₀ = 2.0, the method converges to machine precision within five iterations, with the absolute step size dropping by approximately two orders of magnitude at each step—consistent with the quadratic convergence of Equation (6).

Table 2. Newton–Raphson iteration history for f(x) = x³ − 2x − 5 (x₀ = 2.0, ε = 10⁻¹⁰).

The transcendental equation f(x) = cos(x) − x reaches machine precision in four iterations from x₀ = 0.5. Its unique solution x* ≈ 0.7391 (the Dottie number) is a classical iterative-algorithm benchmark.
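This benchmark is easy to reproduce; a minimal sketch with the step-size tolerance ε₁ = 10⁻¹⁰ from Section 2.4 (the helper name `newton` is an illustrative choice):

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Basic Newton-Raphson loop with a step-size stopping test."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# f(x) = cos(x) - x, f'(x) = -sin(x) - 1, starting from x0 = 0.5
root, iters = newton(lambda x: math.cos(x) - x,
                     lambda x: -math.sin(x) - 1.0, 0.5)
print(root, iters)   # root ~ 0.7390851332 (the Dottie number)
```

The exact iteration count depends on which stopping criterion fires first, so a count within one step of the paper's figure is expected.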

4.2 Geometric Interpretation

Figure 1. Geometric illustration: tangent lines at successive iterates converge rapidly to the root (left: x³−2x−5; right: cos(x)−x).

4.3 Convergence Histories

Figure 2 shows the absolute error on a semi-log scale. The error curves steepen at every step, with the number of correct digits roughly doubling per iteration, as expected for quadratic convergence. Ideal quadratic decay is shown as dashed reference lines.

Figure 2. Convergence history: absolute error |xₙ₊₁ − xₙ| vs. iteration on semi-log scale for all four test cases.

4.4 Order of Convergence Verification

Figure 3 confirms p ≈ 2.00 ± 0.05 for Cases 1 and 2 via log–log regression, in close agreement with Equation (6).

Figure 3. Log–log plot of successive errors eₙ₊₁ vs. eₙ. Slope ≈2 confirms quadratic convergence.

4.5 Sensitivity to the Initial Guess

Figure 4 maps iterations-to-convergence as a function of x₀. Regions requiring many iterations correspond to starting points near inflection points or flat regions of f. A graphical pre-inspection of f is recommended [14, 15].

Figure 4. Iterations to convergence vs. initial guess x₀ for Cases 1 (left) and 2 (right).
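A map like Figure 4 can be reproduced by sweeping x₀ over a grid and recording the iteration count at each point; the grid range and resolution below are assumptions for Case 1:

```python
import numpy as np

def f(x):
    return x**3 - 2*x - 5

def df(x):
    return 3*x**2 - 2

def iters_to_converge(x0, tol=1e-10, max_iter=100):
    """Newton-Raphson iteration count from x0 (max_iter if no convergence)."""
    x = x0
    for n in range(1, max_iter + 1):
        d = df(x)
        if d == 0.0:
            return max_iter          # horizontal tangent: no update defined
        x_new = x - f(x) / d
        if abs(x_new - x) < tol:
            return n
        x = x_new
    return max_iter

counts = [iters_to_converge(x0) for x0 in np.linspace(-3.0, 5.0, 81)]
print(min(counts), max(counts))
```

Starting points near f′(x) = 0 (here x ≈ ±0.816) get flung far from the root by a nearly horizontal tangent and need noticeably more steps, which is exactly the pattern Figure 4 visualizes.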

4.6 Method Comparison

Figure 5 compares N–R, bisection, and secant on f(x) = x³ − 2x − 5. Bisection requires ≈33 iterations; secant ≈10; Newton–Raphson only 5—demonstrating the decisive advantage of quadratic convergence.

Figure 5. Method comparison: Newton–Raphson vs. Bisection vs. Secant. Absolute error |xₙ − x*| on semi-log scale.

4.7 Residual Decay

Figure 6 shows that |f(xₙ)| drops by 4–5 orders of magnitude per iteration, confirming both positional accuracy and genuine smallness of the function value at the computed root.

Figure 6. Residual |f(xₙ)| vs. iteration for all test cases on semi-log scale.

Table 3. Summary of Newton–Raphson performance (ε = 10⁻¹⁰).

5. Python Source Code

The following listing provides the complete Python implementation. Only NumPy and Matplotlib are required.
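The listing itself did not survive extraction. A minimal reconstruction consistent with Section 3.2 (per-iteration logging of xₙ, f(xₙ), f′(xₙ), xₙ₊₁, and |Δx|, with the |Δx| < ε stopping test) might look as follows; the function name `newton_raphson` and the log layout are assumptions, not the authors' original code:

```python
def newton_raphson(f, dfdx, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration with per-step logging (cf. Section 3.2).

    Returns (root, log), where log holds one tuple per step:
    (n, x_n, f(x_n), f'(x_n), x_{n+1}, |dx|).
    """
    log = []
    x = x0
    for n in range(max_iter):
        fx, dfx = f(x), dfdx(x)
        if dfx == 0.0:
            raise ZeroDivisionError(f"f'({x}) = 0: tangent has no x-intercept")
        x_new = x - fx / dfx
        dx = abs(x_new - x)
        log.append((n, x, fx, dfx, x_new, dx))
        if dx < tol:                 # stopping criterion |dx| < eps
            return x_new, log
        x = x_new
    return x, log

# Case 1 of the test suite: f(x) = x^3 - 2x - 5, x0 = 2.0
root, log = newton_raphson(lambda x: x**3 - 2*x - 5,
                           lambda x: 3*x**2 - 2, 2.0)
for n, xn, fxn, dfxn, xnext, dx in log:
    print("n=%d  x=%.12f  |dx|=%.3e" % (n, xn, dx))
print("root = %.12f" % root)
```

This sketch omits the Matplotlib plotting code used to produce Figures 1–6; the logged tuples supply the data those plots would draw from.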

6. Conclusions

1. The iteration formula was derived from the first-order Taylor expansion, and quadratic convergence was demonstrated. The convergence speed is governed by the asymptotic error constant C = |f″(x*)/(2f′(x*))|.

2. Numerical simulation of four canonical equations validated the theoretical analysis, achieving machine-precision accuracy (|f(x*)| < 10⁻¹⁵).

3. The empirical order of convergence p ≈ 2.00 ± 0.05 across all test cases confirms the quadratic-convergence prediction of Equation (6).

4. Newton–Raphson requires roughly 85% fewer iterations than bisection and about half as many as the secant method to reach the same precision.

5. Sensitivity analysis showed the basin of convergence to be broad for well-behaved functions, but caution is needed near inflection points or flat regions. A graphical inspection of f before iterating is strongly advised.

6. The residual |f(xₙ)| decreased super-exponentially in every case, confirming both the positional accuracy and the genuine smallness of the function value at the computed root.

References

W. H. Press, Numerical Recipes 3rd Edition: The Art of Scientific Computing. Cambridge University Press, 2007.

H. Klee and R. Allen, Simulation of Dynamic Systems with MATLAB® and Simulink®. CRC Press, 2018.

J. Ehiwario and S. Aghamie, "Comparative study of bisection, Newton-Raphson and secant methods of root–finding problems," IOSR J. of Engineering, vol. 4, no. 4, pp. 1-7, 2014.

M. A. Hossain, P. M. Menz, and J. M. Stockie, "An open-access clicker question bank for numerical analysis," PRIMUS, vol. 32, no. 8, pp. 858-880, 2022.

A. J. Torii and J. R. d. Faria, "Structural optimization considering smallest magnitude eigenvalues: a smooth approximation," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 39, no. 5, pp. 1745-1754, 2017.

L. N. Trefethen and D. Bau, Numerical linear algebra. SIAM, 2022.

J. E. Dennis Jr and R. B. Schnabel, Numerical methods for unconstrained optimization and nonlinear equations. SIAM, 1996.

C. T. Kelley, Solving nonlinear equations with Newton's method. SIAM, 2003.

E. Süli and D. F. Mayers, An Introduction to Numerical Analysis. Cambridge University Press, 2003.

L. V. Kantorovich, "On Newton’s method for functional equations," in Dokl. Akad. Nauk SSSR, 1948, vol. 59, no. 7, pp. 1237-1240.

G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed. Johns Hopkins University Press, 1996.

E. Badr, S. Almotairi, and A. E. Ghamry, "A Comparative Study among New Hybrid Root Finding Algorithms and Traditional Methods," Mathematics, vol. 9, no. 11, p. 1306, 2021, doi: 10.3390/math9111306.

A. Shaimbetova and B. Shambetova, "A Numerical Comparison of the Bisection Method and Newton’s Method," 2025.

J. R. J. Thompson, "Iterative smoothing for change-point regression function estimation," (in eng), J Appl Stat, vol. 51, no. 16, pp. 3431-3455, 2024, doi: 10.1080/02664763.2024.2352759.

R. Sambharya, G. Hall, B. Amos, and B. Stellato, "Learning to warm-start fixed-point optimization algorithms," Journal of Machine Learning Research, vol. 25, no. 166, pp. 1-46, 2024.