
Numerical Algorithms

https://doi.org/10.1007/s11075-022-01412-w

ORIGINAL PAPER

A simple yet efficient two-step fifth-order weighted-Newton method for nonlinear models

Harmandeep Singh1 · Janak Raj Sharma1 · Sunil Kumar2

Received: 28 March 2022 / Accepted: 8 September 2022


© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022

Abstract
An iterative method with fifth order of convergence for solving nonlinear systems is formulated and analyzed. The primary goal in developing the method is to keep both the convergence order and the computational efficiency as high as possible. Both of these factors are investigated thoroughly, theoretically as well as numerically. A comparison with the existing iterative methods of similar order is carried out to examine the performance of the method. Numerical testing clearly indicates the high degree of precision and efficiency of the new method.

Keywords System of nonlinear equations · Iterative methods · Convergence · Computational efficiency

Mathematics Subject Classifications (2010)  65H10 · 65J10 · 49M15

1 Introduction

To approximate the solution of nonlinear equations using iterative methods, the foremost task is to build a numerical algorithm which shows consistency and stability over a broad range of problems. Notably, the challenge of constructing computationally efficient methods has gained much importance with the advancement of technology.
* Janak Raj Sharma
[email protected]
Harmandeep Singh
[email protected]
Sunil Kumar
[email protected]
1 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal 148106, Punjab, India
2 Department of Mathematics, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Chennai 601103, India


This challenge has led to the development of numerous variants of the classical Newton method (see [1–19] and references therein), which is the most commonly applied iterative method for solving nonlinear systems of equations.
The system of nonlinear equations can be mathematically formulated as

F(x) = 0,    (1.1)

where F : D ⊆ ℝ^m → ℝ^m is a nonlinear operator, generally represented as F(x) = (f_1(x), f_2(x), …, f_m(x))^T with f_i : ℝ^m → ℝ (i = 1, …, m) nonlinear scalar functions, and x = (x_1, …, x_m)^T ∈ D.
Under the condition of continuous differentiability of F(x), the quadratically convergent Newton's method approximates the solution x* ∈ D of system (1.1) iteratively as

x^{(k+1)} = x^{(k)} − F'(x^{(k)})^{−1} F(x^{(k)}),  k = 0, 1, 2, …,    (1.2)

provided the initial estimate x^{(0)} is chosen sufficiently close to the root. Here, F'(x) ∈ L(ℝ^m, ℝ^m) is a linear operator, commonly represented as the Jacobian matrix [∂f_i/∂x_j]_{m×m}. Clearly, method (1.2) requires one function evaluation (F), one Jacobian evaluation (F') and one matrix inversion (F'^{−1}) per iteration.
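To make the operation count concrete, a minimal sketch of iteration (1.2) follows (Python with NumPy is used here purely for illustration; it is not part of the original presentation). In practice the inverse is not formed explicitly; the Newton correction is obtained by solving one linear system per iteration:

import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=100):
    # Newton's method (1.2): solve F'(x) d = F(x) rather than inverting F'(x).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.solve(J(x), F(x))
        x = x - d
        if np.linalg.norm(d) < tol:
            break
    return x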
Let e^{(k)} = x^{(k)} − x* be the local error at the kth iteration. The order of convergence p of an iterative method can be estimated from the error equation (see [2]),

e^{(k+1)} = L e^{(k)p} + O(e^{(k)(p+1)}),

where L ∈ L(ℝ^m × ⋯ × ℝ^m, ℝ^m) (p times) is a p-linear operator, L(·) denoting the set of bounded linear operators, and e^{(k)p} = (e^{(k)}, …, e^{(k)}) (p times).
Iterative techniques that improve the convergence order of Newton's method, at the expense of additional function evaluations and matrix inversions per iteration, are extensively available in the literature on iterative methods. For example, the methods presented in [5, 8–10, 12–14] require two evaluations each of F, F' and F'^{−1}; the methods in [6, 16] require two evaluations each of F and F' and three of F'^{−1}; whereas the method in [7] requires two evaluations of F, three of F' and three of F'^{−1}. The fact is that additional evaluations increase the computational cost in terms of mathematical operations per iteration. In [5], the necessary parameters were introduced in order to evaluate the cost of all computations in units of multiplications, and the concept of the computational efficiency index [1, 3] was reformulated to compare the efficiencies of different iterative methods.
Optimizing the computational cost while increasing the order of convergence is not a matter of chance; rather, it is a demanding task. Taking this as the point of interest, in this paper we present a simple yet efficient fifth-order iterative method involving two function evaluations, two Jacobian evaluations and two matrix inversions per iteration. The iterative scheme consists of two steps, of which the first is a second-order Newton iteration and the second is a weighted-Newton iteration.


The contents of the rest of the paper are summarized here. Section 2 includes the development and convergence analysis of the two-step iterative method. In Section 3, we compare the proposed method with existing methods in the context of computational efficiency. Numerical experimentation is performed in Section 4 to verify the theoretical deductions. Concluding remarks are presented in Section 5.

2 Development of method

To solve the nonlinear system (1.1), we propose a two-step iterative scheme given as

w^{(k)} = x^{(k)} − F'(x^{(k)})^{−1} F(x^{(k)}),
x^{(k+1)} = w^{(k)} − ( p_1 + p_2 (F(w^{(k)})^T F(w^{(k)})) / (F(x^{(k)})^T F(x^{(k)})) ) F'(w^{(k)})^{−1} F(w^{(k)}),    (2.1)

where p_1, p_2 are parameters and the superscript "T" in the second step denotes the transpose of a vector function. Observe that F(t)^T F(t) = ‖F(t)‖^2 under the Euclidean norm. Clearly, the first step of (2.1) is a Newton iteration and the second step is a weighted-Newton iteration.
To discuss the convergence behavior of the proposed scheme, we first state a result in the form of a lemma, to be used for developing the Taylor expansion of vector functions (see [2]).

Lemma 1 Let F : D ⊆ ℝ^m → ℝ^m be a p-times Fréchet differentiable function in the open convex set D ⊆ ℝ^m. Then, for any x, t ∈ D, the following result holds:

F(x + t) = F(x) + F'(x)t + (1/2!)F''(x)t^2 + ⋯ + (1/(p−1)!)F^{(p−1)}(x)t^{p−1} + R_p,    (2.2)

where t^i = (t, …, t) (i times), F^{(i)}(q) ∈ L(ℝ^m × ⋯ × ℝ^m, ℝ^m) (i times) for each i = 1, 2, …, and

‖R_p‖ ≤ (1/p!) sup_{0<h<1} ‖F^{(p)}(x + ht)‖ ‖t‖^p.

In what follows, we prove the fifth-order convergence of method (2.1) under the conditions specified in Lemma 1 and in the theorem itself.

Theorem 1 Let F : D ⊆ ℝ^m → ℝ^m be a sufficiently differentiable function in an open convex set D and let x* ∈ D be a simple solution of the nonlinear system F(x) = 0. Further, assume that F'(x) is continuous and non-singular at x* and that the initial estimate x^{(0)} is sufficiently close to the solution. Then, the sequence generated by method (2.1) converges to x* with fifth order of convergence, provided p_1 = p_2 = 1.


Proof Let e^{(k)} = x^{(k)} − x* be the local error at the kth iteration. Then, using Lemma 1 and the fact that F(x*) = 0, the Taylor expansions of F(x^{(k)}) and F'(x^{(k)}) about the solution x* are obtained as

F(x^{(k)}) = F'(x*)[e^{(k)} + A_2 e^{(k)2} + A_3 e^{(k)3} + A_4 e^{(k)4} + A_5 e^{(k)5}] + O(e^{(k)6}),    (2.3)

F'(x^{(k)}) = F'(x*)[I + 2A_2 e^{(k)} + 3A_3 e^{(k)2} + 4A_4 e^{(k)3} + 5A_5 e^{(k)4}] + O(e^{(k)5}),    (2.4)

where e^{(k)i} = (e^{(k)}, …, e^{(k)}) (i times) and A_i = (1/i!) F'(x*)^{−1} F^{(i)}(x*), i = 1, 2, 3, …, and consequently

F'(x^{(k)})^{−1} = [I + B_1 e^{(k)} + B_2 e^{(k)2} + B_3 e^{(k)3} + B_4 e^{(k)4}] F'(x*)^{−1} + O(e^{(k)5}),    (2.5)

where B_1 = −2A_2, B_2 = −3A_3 + 4A_2^2, B_3 = −4A_4 + 6A_2A_3 + 6A_3A_2 − 8A_2^3, and B_4 = −5A_5 + 8A_2A_4 + 9A_3^2 + 8A_4A_2 − 12A_2^2A_3 − 12A_2A_3A_2 − 12A_3A_2^2 + 16A_2^4.
Let e_w^{(k)} = w^{(k)} − x* be the local error at the first step of method (2.1). Then, substituting the results (2.3)–(2.5) in the first step of (2.1), we get in turn that

e_w^{(k)} = C_1 e^{(k)2} + C_2 e^{(k)3} + C_3 e^{(k)4} + C_4 e^{(k)5} + O(e^{(k)6}),    (2.6)

where C_1 = A_2, C_2 = 2A_3 − 2A_2^2, C_3 = 3A_4 − 4A_2A_3 − 3A_3A_2 + 4A_2^3, and C_4 = 4A_5 − 6A_2A_4 − 6A_3^2 − 4A_4A_2 + 8A_2^2A_3 + 6A_2A_3A_2 + 6A_3A_2^2 − 8A_2^4.
Again, using the Taylor expansions of F(w^{(k)}) and F'(w^{(k)}) about x* and using (2.6), we obtain

F(w^{(k)}) = F'(x*)[K_1 e^{(k)2} + K_2 e^{(k)3} + K_3 e^{(k)4} + K_4 e^{(k)5}] + O(e^{(k)6}),    (2.7)

F'(w^{(k)}) = F'(x*)[I + L_1 e^{(k)2} + L_2 e^{(k)3} + L_3 e^{(k)4}] + O(e^{(k)5}),    (2.8)

and

F'(w^{(k)})^{−1} = [I + M_1 e^{(k)2} + M_2 e^{(k)3} + M_3 e^{(k)4}] F'(x*)^{−1} + O(e^{(k)5}),    (2.9)

where K_1 = A_2, K_2 = 2A_3 − 2A_2^2, K_3 = 3A_4 − 4A_2A_3 − 3A_3A_2 + 5A_2^3, K_4 = 4A_5 − 6A_2A_4 − 6A_3^2 − 4A_4A_2 + 8A_2^2A_3 + 6A_2A_3A_2 + 6A_3A_2^2 − 8A_2^4, L_1 = 2A_2^2, L_2 = 4A_2A_3 − 4A_2^3, L_3 = 6A_2A_4 − 8A_2^2A_3 − 6A_2A_3A_2 + 3A_3A_2^2 + 8A_2^4, M_1 = −2A_2^2, M_2 = −4A_2A_3 + 4A_2^3, and M_3 = −6A_2A_4 + 8A_2^2A_3 + 6A_2A_3A_2 − 3A_3A_2^2 − 4A_2^4.
Next, we attempt to expand the term F(w^{(k)})^T F(w^{(k)}) / F(x^{(k)})^T F(x^{(k)}), which appears in the second step of (2.1). For any t = (t_1, t_2, …, t_m)^T ∈ ℝ^m, we have F(t) = (f_1(t), f_2(t), …, f_m(t))^T with f_i : ℝ^m → ℝ as scalar functions, and

F(w^{(k)})^T F(w^{(k)}) / F(x^{(k)})^T F(x^{(k)}) = ( Σ_{i=1}^{m} f_i^2(w^{(k)}) ) / ( Σ_{i=1}^{m} f_i^2(x^{(k)}) ).    (2.10)

The expansions of f_i(x^{(k)}) and f_i(w^{(k)}) about the solution x* are given as (see [2])

f_i(x^{(k)}) = f_i'(x*) e^{(k)} + (1/2) f_i''(x*) e^{(k)2} + O(e^{(k)3}),    (2.11)

f_i(w^{(k)}) = f_i'(x*) e_w^{(k)} + (1/2) f_i''(x*) e_w^{(k)2} + O(e_w^{(k)3}),    (2.12)

where f_i'(t) = (∂f_i/∂t_1, …, ∂f_i/∂t_m) represents a row vector and f_i''(t) = [∂^2 f_i/∂t_j ∂t_k]_{m×m} is a Hessian matrix.
Conveniently, we write (2.11) and (2.12) as

f_i(x^{(k)}) = R_i e^{(k)} + H_i e^{(k)2} + O(e^{(k)3}),    (2.13)

f_i(w^{(k)}) = R_i e_w^{(k)} + H_i e_w^{(k)2} + O(e_w^{(k)3}),    (2.14)

where R_i = f_i'(x*) and H_i = (1/2) f_i''(x*).


Moreover, f_i(x^{(k)}) being a scalar function, using f_i(x^{(k)}) = f_i(x^{(k)})^T and (2.13), we obtain in turn that

f_i^2(x^{(k)}) = f_i(x^{(k)})^T f_i(x^{(k)}) = R_i^T R_i e^{(k)2} + (R_i^T H_i + H_i^T R_i) e^{(k)3} + O(e^{(k)4}).    (2.15)

Here, R_i^T R_i represents an m × m matrix (say, P_i). Further, letting Q_i = R_i^T H_i + H_i^T R_i, we have

f_i^2(x^{(k)}) = P_i e^{(k)2} + Q_i e^{(k)3} + O(e^{(k)4}).    (2.16)

In a similar way, we obtain

f_i^2(w^{(k)}) = P_i e_w^{(k)2} + Q_i e_w^{(k)3} + O(e_w^{(k)4}).    (2.17)

Substituting the results (2.16) and (2.17) in (2.10), we get

F(w^{(k)})^T F(w^{(k)}) / F(x^{(k)})^T F(x^{(k)}) = ( P e_w^{(k)2} + Q e_w^{(k)3} + O(e_w^{(k)4}) ) / ( P e^{(k)2} + Q e^{(k)3} + O(e^{(k)4}) ),    (2.18)

where P = Σ_{i=1}^{m} P_i and Q = Σ_{i=1}^{m} Q_i.
Consequently, combining (2.6) and (2.18), and then simplifying, we have

F(w^{(k)})^T F(w^{(k)}) / F(x^{(k)})^T F(x^{(k)}) = A_2^2 e^{(k)2} + (2A_2A_3 + A_3A_2 − 4A_2^3 − P^{−1}QA_2^2) e^{(k)3} + O(e^{(k)4}).    (2.19)
Using (2.6), (2.7), (2.9) and (2.19), the second step of method (2.1) yields the error equation

e^{(k+1)} = x^{(k+1)} − x* = Y_1 e^{(k)2} + Y_2 e^{(k)3} + Y_3 e^{(k)4} + Y_4 e^{(k)5} + O(e^{(k)6}),    (2.20)

where Y_1 = (1 − p_1)A_2, Y_2 = 2(1 − p_1)(A_3 − A_2^2), Y_3 = (1 − p_1)(3A_4 − 4A_2A_3 − 3A_3A_2) + (4 − 3p_1 − p_2)A_2^3, and Y_4 = (1 − p_1)(4A_5 − 6A_2A_4 − 6A_3^2 − 4A_4A_2) + (8 − 6p_1 − 2p_2)A_2^2A_3 + (6 − 4p_1 − 2p_2)A_2A_3A_2 + (6 − 6p_1 − 2p_2)A_3A_2^2 − (8 − 4p_1 − 6p_2)A_2^4 + p_2 P^{−1}Q A_2^3.


To maximize the convergence order of the proposed method, the values of the parameters p_1 and p_2 should be chosen appropriately. In that sense, if we choose p_1 = p_2 = 1, then the coefficients Y_1, Y_2 and Y_3 become zero and the error equation (2.20) reduces to

e^{(k+1)} = (2A_2^4 − 2A_3A_2^2 + P^{−1}QA_2^3) e^{(k)5} + O(e^{(k)6}).

Hence, the fifth order of convergence of the proposed method is proved.

Thus, the proposed method (2.1) is finally presented as

w^{(k)} = x^{(k)} − F'(x^{(k)})^{−1} F(x^{(k)}),
x^{(k+1)} = w^{(k)} − ( 1 + (F(w^{(k)})^T F(w^{(k)})) / (F(x^{(k)})^T F(x^{(k)})) ) F'(w^{(k)})^{−1} F(w^{(k)}).    (2.21)

It is clear that, in terms of computational cost, this formula uses two function evaluations, two Jacobian evaluations and two matrix inversions per iteration. For future reference, the method is denoted by M1.
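A minimal sketch of M1 follows (again Python with NumPy as an illustrative assumption; the experiments reported in Section 4 use Mathematica). The two inversions appear as two linear solves, and the weight is the scalar 1 + ‖F(w^{(k)})‖^2/‖F(x^{(k)})‖^2:

import numpy as np

def method_M1(F, J, x0, tol=1e-12, max_iter=100):
    # Two-step fifth-order weighted-Newton method (2.21).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        w = x - np.linalg.solve(J(x), Fx)               # first step: Newton iteration
        Fw = F(w)
        weight = 1.0 + (Fw @ Fw) / (Fx @ Fx)            # 1 + F(w)^T F(w) / F(x)^T F(x)
        x_new = w - weight * np.linalg.solve(J(w), Fw)  # second step: weighted Newton
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x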

3 Computational complexity

Ranking numerical methods based on their computational efficiency is a difficult task, since the quality of an algorithm depends on many parameters. Considering root-solvers, Brent [20] remarked that "...the method with the higher efficiency is not always the method with the higher order." Great accuracy of the sought results is not always the primary target; in many cases the efficiency of the implemented algorithm is the preferable feature.
The computational efficiency of a root-solver, which is the subject of this section, can be defined in various manners, but it is always proportional to the order of convergence p and inversely proportional to the computational cost C per iteration, that is, the number of function evaluations taken with certain weights. Traub [3] introduced the measure of efficiency by the ratio

E_T = p/C,

whereas Ostrowski [1] dealt with the alternative definitions

E_O1 = p^{1/C} and E_O2 = log p / C.
An interesting question arises: which of these definitions best describes computational efficiency in practice, when iterative methods are implemented on digital computers? This will be clarified in what follows.
To solve nonlinear equations, assume that the tested equation has a solution x* contained in an interval (n-cube or n-ball in general) of unit diameter. Starting with an initial approximation x^{(0)} to x*, a stopping criterion is given by

‖x^{(k)} − x*‖ ≤ τ = 10^{−d},

where k is the iteration index, τ is the required accuracy and d is the number of significant decimal digits of the approximation x^{(k)}. Assume that ‖x^{(0)} − x*‖ ≈ 10^{−1} and let p be the order of convergence of the applied iterative method. Then the (theoretical) number of iterative steps necessary to reach the accuracy τ can be calculated approximately from the relation 10^{−d} = 10^{−p^k} as k ≈ log d / log p. Taking into account that the computational efficiency is proportional to the reciprocal of the total computational cost kC of the completed iterative process consisting of k iterative steps, one gets the estimate of computational efficiency

E = 1/(kC) = (1/log d)(log p / C).    (3.1)

For the function F(x) = (f_1(x), f_2(x), …, f_m(x))^T, where x = (x_1, x_2, …, x_m)^T, the computational cost C is computed as (see [5])

C(ν_0, ν_1, m, l) = P_0(m)ν_0 + P_1(m)ν_1 + P(m, l),    (3.2)

where P_0(m) represents the number of evaluations of scalar functions used in the evaluation of F, P_1(m) is the number of evaluations of scalar functions of F', i.e., ∂f_i/∂x_j, 1 ≤ i, j ≤ m, and P(m, l) represents the number of products or quotients needed per iteration. In order to express the value of C(ν_0, ν_1, m, l) in terms of products, the ratios ν_0 > 0 and ν_1 > 0 between products and evaluations, and a ratio l > 1 between products and quotients, are required.
It is clear from the above discussion that, for estimating the computational efficiency of iterative methods at some fixed accuracy, it is sufficient to compare the values of log p / C. This means that the second Ostrowski formula, E_O2 = log p / C, is preferable in the sense of best fitting the real CPU time. Let us note that this formula was used in many manuscripts and books; see, e.g., McNamee [4] and Brent [20].
We shall make use of definition (3.1) for assessing the computational efficiency of the presented method. To do this, we must consider all possible factors which contribute to the total cost of computation. For example, to compute F in any iterative method we need to calculate m scalar functions. The number of scalar evaluations is m^2 for any new derivative F'. In order to compute an inverse linear operator, we solve a linear system, where we have m(m − 1)(2m − 1)/6 products and m(m − 1)/2 quotients in the LU decomposition, and m(m − 1) products and m quotients in the resolution of the two triangular linear systems. We must add m^2 products for the multiplication of a matrix with a vector or of a matrix by a scalar, and m products for the multiplication of a vector by a scalar.
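These counts can be collected in a small helper (a sketch under the stated counting conventions, with a quotient weighted by l product units):

def inversion_cost(m, l):
    # LU decomposition: m(m-1)(2m-1)/6 products and m(m-1)/2 quotients;
    # two triangular solves: m(m-1) products and m quotients.
    products = m*(m - 1)*(2*m - 1)//6 + m*(m - 1)
    quotients = m*(m - 1)//2 + m
    return products + l*quotients   # total cost of one linear solve, in product units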
In order to demonstrate the computational efficiency, we compare the new method M1 with the existing fifth-order methods by Grau-Grau-Noguera [5], Cordero-Hueso-Martínez-Torregrosa [6], Xu-Jieqing [7], Sharma-Gupta [8], Xiao-Yin [9], Cordero-Gómez-Torregrosa [12], Xiao-Yin [13], Solaiman-Hashim [16] and Arroyo-Cordero-Torregrosa [19]. For ready reference, these methods are shown below.
Method by Grau et al. (M2):


y^{(k)} = x^{(k)} − F'(x^{(k)})^{−1} F(x^{(k)}),
z^{(k)} = x^{(k)} − (1/2)[F'(x^{(k)})^{−1} + F'(y^{(k)})^{−1}] F(x^{(k)}),
x^{(k+1)} = z^{(k)} − F'(y^{(k)})^{−1} F(z^{(k)}).

Method by Cordero et al. (M3):

y^{(k)} = x^{(k)} − F'(x^{(k)})^{−1} F(x^{(k)}),
z^{(k)} = x^{(k)} − 2[F'(y^{(k)}) + F'(x^{(k)})]^{−1} F(x^{(k)}),
x^{(k+1)} = z^{(k)} − F'(y^{(k)})^{−1} F(z^{(k)}).

Xu-Jieqing method (M4):

y^{(k)} = x^{(k)} − F'(x^{(k)})^{−1} F(x^{(k)}),
z^{(k)} = x^{(k)} − 4[3F'((2x^{(k)} + y^{(k)})/3) + F'(y^{(k)})]^{−1} F(x^{(k)}),
x^{(k+1)} = z^{(k)} − F'(y^{(k)})^{−1} F(z^{(k)}).

Sharma-Gupta method (M5):

y^{(k)} = x^{(k)} − (1/2) F'(x^{(k)})^{−1} F(x^{(k)}),
z^{(k)} = x^{(k)} − F'(y^{(k)})^{−1} F(x^{(k)}),
x^{(k+1)} = z^{(k)} − [2F'(y^{(k)})^{−1} − F'(x^{(k)})^{−1}] F(z^{(k)}).

Xiao-Yin method (M6):

y^{(k)} = x^{(k)} − (2/3) F'(x^{(k)})^{−1} F(x^{(k)}),
z^{(k)} = x^{(k)} − 4[3F'(y^{(k)}) + F'(x^{(k)})]^{−1} F(x^{(k)}),
x^{(k+1)} = z^{(k)} − ( 8[3F'(y^{(k)}) + F'(x^{(k)})]^{−1} − F'(x^{(k)})^{−1} ) F(z^{(k)}).

Method by Cordero et al. (M7):


y^{(k)} = x^{(k)} − F'(x^{(k)})^{−1} F(x^{(k)}),
x^{(k+1)} = y^{(k)} − [ (5/4)I − (1/2)(F'(y^{(k)})^{−1} F'(x^{(k)})) + (1/4)(F'(y^{(k)})^{−1} F'(x^{(k)}))^2 ] F'(y^{(k)})^{−1} F(y^{(k)}).

Xiao-Yin method (M8):

y^{(k)} = x^{(k)} − (2/3) F'(x^{(k)})^{−1} F(x^{(k)}),
z^{(k)} = x^{(k)} − (1/4)[3F'(y^{(k)})^{−1} + F'(x^{(k)})^{−1}] F(x^{(k)}),
x^{(k+1)} = z^{(k)} − [ (3/2) F'(y^{(k)})^{−1} − (1/2) F'(x^{(k)})^{−1} ] F(z^{(k)}).

Solaiman-Hashim method (M9):

y^{(k)} = x^{(k)} − (1/2) F'(x^{(k)})^{−1} F(x^{(k)}),
z^{(k)} = x^{(k)} − F'(y^{(k)})^{−1} F(x^{(k)}),
x^{(k+1)} = z^{(k)} − [2F'(y^{(k)}) − F'(x^{(k)})]^{−1} F(z^{(k)}).


Method by Arroyo et al. (M10):

y^{(k)} = x^{(k)} − F'(x^{(k)})^{−1} F(x^{(k)}),
z^{(k)} = y^{(k)} − 5F'(x^{(k)})^{−1} F(y^{(k)}),
x^{(k+1)} = z^{(k)} − (1/5) F'(x^{(k)})^{−1} [ −16F(y^{(k)}) + F(z^{(k)}) ].

Denote the efficiency indices of the methods Mi (i = 1, 2, 3, …, 10) by E_i and the computational costs by C_i. Then, taking into account the above considerations, we calculate the computational costs and the corresponding efficiency indices as follows:

C_1 = 2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 3m + 4 + 3l(1 + m)) and E_1 = (1/D) log 5 / C_1,
C_2 = 2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 6m − 5 + 3l(2 + m)) and E_2 = (1/D) log 5 / C_2,
C_3 = 2mν_0 + 2m^2 ν_1 + (m/2)(2m^2 + 3m − 3 + 3l(1 + m)) and E_3 = (1/D) log 5 / C_3,
C_4 = 2mν_0 + 3m^2 ν_1 + (m/2)(2m^2 + 5m + 1 + 3l(1 + m)) and E_4 = (1/D) log 5 / C_4,
C_5 = 2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 9m − 5 + 3l(3 + m)) and E_5 = (1/D) log 5 / C_5,
C_6 = 2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 12m − 2 + 3l(3 + m)) and E_6 = (1/D) log 5 / C_6,
C_7 = 2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 15m − 2 + 3l(3 + m)) and E_7 = (1/D) log 5 / C_7,
C_8 = 2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 9m + 4 + 3l(3 + m)) and E_8 = (1/D) log 5 / C_8,
C_9 = 2mν_0 + 2m^2 ν_1 + (m/2)(2m^2 + 5m − 3 + 3l(1 + m)) and E_9 = (1/D) log 5 / C_9,
C_10 = 3mν_0 + m^2 ν_1 + (m/6)(2m^2 + 15m + 1 + 3l(5 + m)) and E_10 = (1/D) log 5 / C_10.

Here D = log d.
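As a quick numerical cross-check (a sketch, not part of the paper), the above formulas can be evaluated directly; with base-10 logarithms, D = 10^{-5} as chosen in Section 4, and the parameter values of Example 1, this reproduces E_1 ≈ 45.05 of Table 2:

import math

def cost_M1(m, l, nu0, nu1):
    return 2*m*nu0 + 2*m**2*nu1 + (m/3)*(2*m**2 + 3*m + 4 + 3*l*(1 + m))

def cost_M2(m, l, nu0, nu1):
    return 2*m*nu0 + 2*m**2*nu1 + (m/3)*(2*m**2 + 6*m - 5 + 3*l*(2 + m))

def efficiency(C, D=1e-5, p=5):
    return math.log10(p) / (D * C)

m, l, nu0, nu1 = 2, 2.81, 189.96, 95.36                   # Example 1 parameters
print(efficiency(cost_M1(m, l, nu0, nu1)))                # ~45.05, matching Table 2
print(cost_M2(m, l, nu0, nu1) / cost_M1(m, l, nu0, nu1))  # ratio R_{1,2} > 1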

3.1 Comparison between efficiencies

To compare the iterative methods, say Mi versus Mj, of orders p and q respectively, we consider the ratio

R_{i,j} = E_i/E_j = (C_j log p) / (C_i log q).    (3.3)

It is clear that if R_{i,j} > 1, the iterative method Mi is more efficient than Mj, which will be denoted as Mi ⋗ Mj. In the sequel, we compare the computational efficiencies of the methods defined above.
M1 versus M2 case: In this case the ratio is

R_{1,2} = [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 6m − 5 + 3l(2 + m))] / [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 3m + 4 + 3l(1 + m))].

It can be shown easily that R_{1,2} > 1 for m ≥ 2, which implies that E_1 > E_2 and hence M1 ⋗ M2 for all m ≥ 2 and l > 1.
M1 versus M3 case: In this case the ratio

R_{1,3} = [2mν_0 + 2m^2 ν_1 + (m/2)(2m^2 + 3m − 3 + 3l(1 + m))] / [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 3m + 4 + 3l(1 + m))] > 1,

for m ≥ 2, which implies that E_1 > E_3 for all m ≥ 2 and l > 1. Therefore, M1 ⋗ M3.
M1 versus M4 case: In this case the ratio

R_{1,4} = [2mν_0 + 3m^2 ν_1 + (m/2)(2m^2 + 5m + 1 + 3l(1 + m))] / [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 3m + 4 + 3l(1 + m))] > 1,

for m ≥ 2, which shows that E_1 > E_4 for all m ≥ 2 and l > 1. Therefore, M1 ⋗ M4.
M1 versus M5 case: For this case the ratio

R_{1,5} = [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 9m − 5 + 3l(3 + m))] / [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 3m + 4 + 3l(1 + m))] > 1,

for m ≥ 2. Thus, we conclude that E_1 > E_5 and hence M1 ⋗ M5 for all m ≥ 2 and l > 1.
M1 versus M6 case: In this case the ratio

R_{1,6} = [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 12m − 2 + 3l(3 + m))] / [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 3m + 4 + 3l(1 + m))] > 1,

for m ≥ 2, which implies that E_1 > E_6 for all m ≥ 2 and l > 1, that is, M1 ⋗ M6.
M1 versus M7 case: In this case the ratio

R_{1,7} = [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 15m − 2 + 3l(3 + m))] / [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 3m + 4 + 3l(1 + m))] > 1,

for m ≥ 2, which implies that E_1 > E_7 and consequently M1 ⋗ M7 for all m ≥ 2 and l > 1.
M1 versus M8 case: In this case the ratio

R_{1,8} = [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 9m + 4 + 3l(3 + m))] / [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 3m + 4 + 3l(1 + m))] > 1,

for m ≥ 2, which shows that E_1 > E_8 and hence M1 ⋗ M8 for all m ≥ 2 and l > 1.
M1 versus M9 case: In this case the ratio

R_{1,9} = [2mν_0 + 2m^2 ν_1 + (m/2)(2m^2 + 5m − 3 + 3l(1 + m))] / [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 3m + 4 + 3l(1 + m))] > 1,

for m ≥ 2, which implies that E_1 > E_9 for all m ≥ 2 and l > 1. Therefore, M1 ⋗ M9.
M1 versus M10 case: In this case the ratio

R_{1,10} = [3mν_0 + m^2 ν_1 + (m/6)(2m^2 + 15m + 1 + 3l(5 + m))] / [2mν_0 + 2m^2 ν_1 + (m/3)(2m^2 + 3m + 4 + 3l(1 + m))] > 1

holds only if ν_0 > mν_1 + (1/6)(2m^2 − 9m + 7 + 3l(m − 3)). It implies that E_1 > E_10, and therefore M1 ⋗ M10, if ν_0 > mν_1 + (1/6)(2m^2 − 9m + 7 + 3l(m − 3)). In this case, the geometrical comparison of the methods is depicted in Fig. 1, where the boundary lines are presented for the special cases m = 5, 10, 25, 50, taking l = 3 in particular. Let us note that these boundary lines divide the region into two parts, wherein the method M1 is more efficient than M10 in the upper region of each line.

Fig. 1 Boundary lines for the comparison of M1 and M10
We summarize the above results in the following theorem:

Theorem 2 For all ν_0 > 0, ν_1 > 0 and l > 1, we have:

(i) M1 ⋗ Mi for each i = 2, …, 9, and
(ii) M1 ⋗ M10 only if ν_0 > mν_1 + (1/6)(2m^2 − 9m + 7 + 3l(m − 3)).

4 Numerical experimentation

In this section, the performance of the new fifth-order method is assessed and compared numerically with the existing fifth-order methods described in Section 3. The numerical experimentation is executed on systems of nonlinear equations arising in various practical situations. The numerical computations are executed using multiple-precision arithmetic in the Mathematica software [21]

Table 1 CPU time and estimation of computational cost of elementary functions

Functions   x*y      x/y      √x       e^x      ln(x)    sin(x)   cos(x)   arctan(x)
CPU time    0.0172   0.0484   0.0234   1.5562   1.3469   1.6938   1.6896   2.9797
Cost        1        2.81     1.36     90.48    78.31    98.48    98.23    173.24

where x = √3 − 1 and y = √5 (with 4096 digits of accuracy)

installed on the machine with specifications: Intel(R) Core (TM) i5-9300H proces-
sor and Windows 10 operating system.
To relate the numerical experimentation to the computational efficiency discussed in Section 3, the parameters ν_0 and ν_1 are required to be estimated for each considered numerical example. To work out this estimation, the evaluation cost of each elementary function is expressed in terms of product units; it must be noted, however, that the evaluation cost actually varies with the desired accuracy of the output. Table 1 displays the CPU time elapsed, measured in milliseconds, during the execution of elementary operations in Mathematica, and their estimated cost of evaluation in units of products. Apparently, the cost of a division between two numbers is approximately 2.81 times the cost of a product.
The following examples are considered for the numerical tests, and the values of the parameters (m, l, ν_0, ν_1) used in (3.2) are provided for each example.

Example 1 Starting with the system of 2 nonlinear equations:

e^{−x_1^2} − sin(x_2) = 0,
−sin(x_1) + e^{−x_2^2} = 0,

and in particular, to obtain the solution

x* = (0.6806..., 0.6806...)^T,

we take the initial estimate as x^{(0)} = (1/2, 1/2)^T. The concrete values of the parameters {m, l, ν_0, ν_1}, obtained with the help of the estimates of elementary functions displayed in Table 1, are {2, 2.81, 189.96, 95.36}.

Example 2 Consider the system of equations:

x_i sin(x_{i+1}) − 1 = 0,  i = 1, 2, ..., m − 1,
x_m sin(x_1) − 1 = 0.

We solve the problem taking m = 5 and choose the initial estimate x^{(0)} = (3/4, 3/4, 3/4, 3/4, 3/4)^T to find the solution

x* = (1.1141..., 1.1141..., 1.1141..., 1.1141..., 1.1141...)^T.

Here, the values of the parameters are estimated as {m, l, ν_0, ν_1} = {5, 2.81, 99.48, 39.54}.


Example 3 Next, let us take the system of equations:

tan^{−1}(x_i) + 1 − 2( Σ_{j=1, j≠i}^{m} x_j ) = 0,  i = 1, 2, ..., m.

Taking m = 10, we choose the initial estimate x^{(0)} = (1/10, ..., 1/10)^T to obtain the solution

x* = (0.0588..., 0.0588..., ......, 0.0588...)^T.

Here, the estimated values of the parameters are {m, l, ν_0, ν_1} = {10, 2.81, 173.24, 0.38}.

Example 4 Consider the nonlinear integral equation F(x) = 0, where

F(x)(s) = x(s) − 1 + (s/2) ∫_0^1 cos(x(t)) dt,    (4.1)

with s ∈ [0,1] and x ∈ D = B(0,2) ⊂ X. Here, X = C[0,1] is the space of continuous functions on [0,1] with the norm ‖x‖ = max_{s ∈ [0,1]} |x(s)|.
This kind of integral equation, particularly called the Chandrasekhar equation (see [22]), arises in the study of radiative transfer theory, the kinetic theory of gases and neutron transfer theory.
The system of nonlinear equations, obtained by discretizing (4.1) using the trapezoidal rule of integration with step size h = 1/m, is given as

x_i − 1 + (s_i/(2m)) ( (1/2)cos(x_0) + Σ_{j=1}^{m−1} cos(x_j) + (1/2)cos(x_m) ) = 0,  i = 1, 2, ..., m,    (4.2)

where s_i = t_i = i/m, x_i = x(t_i) and x_0 = 1/2. To compare the performance of the new method with the existing methods, we choose m = 10 and the initial estimate (1/2, ..., 1/2)^T. The solution of the transformed problem is

x* = (0.9655..., 0.9310..., 0.8965..., 0.8620..., 0.8275..., 0.7929..., 0.7584..., 0.7239..., 0.6894..., 0.6549...)^T.

In this problem, the specific values of the parameters are {m, l, ν_0, ν_1} = {10, 2.81, 108.05, 9.85}.
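For concreteness, a sketch of the discretized system (4.2) with m = 10 follows (assuming NumPy; the Jacobian is obtained by differentiating (4.2) directly, and the method_M1 sketch of Section 2 can be applied to it):

import numpy as np

m = 10
t = np.arange(1, m + 1) / m                  # nodes t_i = s_i = i/m
x0_node = 0.5                                # fixed value x_0 = 1/2 as in (4.2)
wts = np.ones(m); wts[-1] = 0.5              # trapezoidal weights on x_1, ..., x_m

def F(x):
    # Residual of (4.2): x_i - 1 + (s_i/(2m)) [cos(x_0)/2 + sum_j w_j cos(x_j)].
    quad = 0.5*np.cos(x0_node) + np.sum(wts * np.cos(x))
    return x - 1 + t/(2*m) * quad

def J(x):
    # dF_i/dx_j = delta_ij - (s_i/(2m)) w_j sin(x_j).
    return np.eye(m) - np.outer(t/(2*m), wts * np.sin(x))

# x_star = method_M1(F, J, np.full(m, 0.5))  # converges to the solution quoted above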

Example 5 Next, we take the system of nonlinear equations:

x_i x_{i+1} − e^{−x_i} − e^{−x_{i+1}} = 0,  i = 1, 2, ..., m − 1,
x_m x_1 − e^{−x_m} − e^{−x_1} = 0.

Taking m = 15, we choose the initial estimate x^{(0)} = (12/10, ..., 12/10)^T for the solution

x* = (0.9012..., 0.9012..., ......, 0.9012...)^T.

For this problem, the values of the parameters are computed as {m, l, ν_0, ν_1} = {15, 2.81, 91.48, 6.03}.

Example 6 Consider the system of nonlinear equations:

x_i + 5 − 2 log( 1 + Σ_{j=1, j≠i}^{m} x_j ) = 0,  i = 1, 2, ..., m.

In particular, we take m = 25. The initial estimate, to obtain the solution

x* = (4.2863..., 4.2863..., ......, 4.2863...)^T,

is chosen as x^{(0)} = (4, ..., 4)^T. The values of the parameters in this problem are estimated as {m, l, ν_0, ν_1} = {25, 2.81, 78.31, 0.11}.

Example 7 Next, we consider another nonlinear integral equation (see [23]):

y(t) = t/e + ∫_0^1 2ts e^{−y(s)^2} ds,

where t ∈ [0,1]. Discretizing this equation using the trapezoidal rule of integration with interval length h = 1/m and weights p_i, we get

y_i = t_i/e + 2t_i Σ_{j=1}^{m} p_j t_j e^{−y_j^2},  i = 1, 2, 3, ..., m,

where y_i = y(t_i) for each i. We take m = 50 for this problem, with the initial estimate x^{(0)} = (1, ..., 1)^T. The solution of the given problem is

x* = ( 0.0199..., 0.0399..., 0.0599..., 0.0799..., 0.0999..., 0.1199..., 0.1399..., 0.1599..., 0.1799..., 0.1999...,
0.2199..., 0.2399..., 0.2599..., 0.2799..., 0.2999..., 0.3199..., 0.3399..., 0.3599..., 0.3799..., 0.3999...,
0.4199..., 0.4399..., 0.4599..., 0.4799..., 0.4999..., 0.5199..., 0.5399..., 0.5599..., 0.5799..., 0.5999...,
0.6199..., 0.6399..., 0.6599..., 0.6799..., 0.6999..., 0.7199..., 0.7399..., 0.7599..., 0.7799..., 0.7999...,
0.8199..., 0.8399..., 0.8599..., 0.8799..., 0.8999..., 0.9199..., 0.9399..., 0.9599..., 0.9799..., 0.9999...)^T.

Here, the concrete values of the parameters are {m, l, ν_0, ν_1} = {50, 2.81, 91.5, 1.85}.

Example 8 Now, we consider a system of 200 nonlinear equations:

e^{−x_i} − Σ_{j=1, j≠i}^{m} x_j = 0,  i = 1, 2, ..., 200.

Here, we set the initial estimate x^{(0)} = (3/2, ..., 3/2)^T to obtain the solution

x* = (0.0050..., 0.0050..., ......, 0.0050...)^T.

In this problem, the computed values of the parameters are {m, l, ν_0, ν_1} = {200, 2.81, 90.48, 0.45}.

Example 9 At last, we consider a large system of 500 equations:

x_i + log(2 + x_i + x_{i+1}) = 0,  i = 1, 2, ..., 499,
x_500 + log(2 + x_500 + x_1) = 0.

We select the initial estimate x^{(0)} = (1/10, ..., 1/10)^T for the solution

x* = (−0.3149..., −0.3149..., ......, −0.3149...)^T.

The values of the parameters are estimated as {m, l, ν_0, ν_1} = {500, 2.81, 78.31, 0.0056}.
As already mentioned, all the numerical computations have been performed in the software package Mathematica using multi-precision arithmetic, in particular using 4096 digits of accuracy. The program of each method is executed with the initial estimate given for each example, using the criterion ‖x^{(k)} − x^{(k−1)}‖ + ‖F(x^{(k)})‖ < 10^{−100} to abort the iterations.
For the comparison of the proposed method (M1) with the existing methods (Mi, i ∈ {2, 3, ..., 10}), Table 2 shows the CPU time (in seconds) elapsed during the execution of the program and the number of iterations (k) required to converge with the desired accuracy to the specified solution for each example. To compare the computational efficiency, the efficiency index (E_i) is also computed, choosing D = 10^{−5} in all examples. Furthermore, the error norms (‖x^{(k)} − x^{(k−1)}‖) in three successive iterations, along with the residual error norm (‖F(x^{(k)})‖), are also listed in this table to show the accuracy of the computed solution. To validate the order of convergence, the approximate computational order of convergence (ACOC) is computed using the formula (see [12])

ACOC = ln( ‖x^{(k+1)} − x^{(k)}‖ / ‖x^{(k)} − x^{(k−1)}‖ ) / ln( ‖x^{(k)} − x^{(k−1)}‖ / ‖x^{(k−1)} − x^{(k−2)}‖ ).
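In a sketch, the ACOC is obtained directly from the three most recent step norms (the helper below and its storage convention are assumptions of this presentation):

import math

def acoc(step_norms):
    # step_norms holds ||x^(k+1) - x^(k)|| for successive k; use the last three.
    e2, e1, e0 = step_norms[-1], step_norms[-2], step_norms[-3]
    return math.log(e2 / e1) / math.log(e1 / e0)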

Analyzing the findings in Table 2, it can be deduced that the proposed method is computationally efficient compared with the existing methods, as in the majority of cases it utilizes less CPU time, keeps a higher efficiency index and requires a smaller or equal number of iterations to converge. The computed ACOC also confirms the fifth-order convergence numerically. Further, the errors in the successive iterations evidently imply the high degree of precision of the new method, even for sufficiently large systems.

4.1 Applications to boundary value problems

Here, we examine the efficiency of the methods for the solution of the following boundary value problems and display the outcome in respect of: (i) the number of iterations (k), (ii) the efficiency index (E_i), and (iii) the CPU time (in seconds) elapsed.


Table 2  Comparison of performance of methods


Method CPU time Ei k ||x(1) − x(0)|| ||x(2) − x(1)|| ||x(3) − x(2)|| ||F(x(3))|| ACOC

Ex. 1
M1 0.161500 45.0489 3 2.55e − 01 4.82e − 09 1.73e − 44 1.72e − 221 5.00
M2 0.164167 44.9441 3 2.55e − 01 3.64e − 07 1.90e − 35 1.20e − 176 5.00
M3 0.169333 44.8342 3 2.55e − 01 3.59e − 07 1.55e − 35 3.88e − 177 5.00
M4 0.198000 35.8731 3 2.55e − 01 3.98e − 10 3.65e − 51 3.86e − 256 5.00
M5 0.161567 44.6678 3 2.55e − 01 2.19e − 05 1.34e − 25 1.86e − 126 5.00
M6 0.158833 44.4971 3 2.55e − 01 1.92e − 07 9.66e − 37 5.10e − 183 5.00
M7 0.164167 44.3841 3 2.55e − 01 1.44e − 08 1.85e − 42 1.07e − 211 5.00
M8 0.158833 44.4971 3 2.55e − 01 1.93e − 07 6.34e − 37 4.01e − 184 5.00
M9 0.169167 44.7195 3 2.55e − 01 2.19e − 05 1.23e − 25 1.15e − 126 5.00
M10 0.168500 44.9825 3 2.55e − 01 5.56e − 08 4.99e − 39 4.72e − 194 5.00
Ex. 2
M1 0.359167 22.0412 3 8.13e − 01 8.20e − 04 2.04e − 20 2.74e − 103 5.00
M2 0.361033 21.8753 3 8.14e − 01 8.89e − 05 1.53e − 25 3.26e − 129 5.00
M3 0.411900 21.4846 3 8.14e − 01 8.01e − 05 9.03e − 26 2.29e − 130 5.00
M4 0.515400 16.3429 3 8.14e − 01 8.08e − 06 9.04e − 33 2.20e − 167 5.00
M5 0.402533 21.6112 4 8.13e − 01 1.31e − 03 8.69e − 18 1.55e − 88 5.00
M6 0.389100 21.4126 3 8.14e − 01 1.64e − 04 3.36e − 24 1.68e − 122 5.00
M7 0.407967 21.2498 3 8.14e − 01 3.93e − 04 2.57e − 22 4.32e − 113 5.00
M8 0.397800 21.5114 3 8.14e − 01 2.23e − 05 1.02e − 28 2.91e − 145 5.00
M9 0.441233 21.3208 4 8.13e − 01 1.03e − 03 2.61e − 18 3.75e − 91 5.00
M10 0.389667 26.3166 4 8.15e − 01 1.15e − 03 2.19e − 19 7.62e − 98 5.00
Ex. 3
M1 0.452000 15.0969 3 1.30e − 01 3.53e − 13 2.49e − 71 7.36e − 361 5.00
M2 0.458333 14.7836 3 1.30e − 01 1.16e − 13 4.85e − 74 1.04e − 374 5.00
M3 0.510333 13.6001 3 1.30e − 01 1.17e − 13 4.91e − 74 1.10e − 374 5.00
M4 0.505000 13.1945 3 1.30e − 01 5.32e − 16 1.19e − 88 1.13e − 450 5.00
M5 0.453000 14.3936 3 1.30e − 01 4.96e − 11 4.12e − 58 2.78e − 292 5.00
M6 0.474000 14.0748 3 1.30e − 01 4.22e − 16 3.02e − 86 9.63e − 436 5.00
M7 0.458333 13.7970 3 1.30e − 01 1.85e − 13 4.88e − 73 1.06e − 369 5.00
M8 0.473667 14.3053 3 1.30e − 01 5.82e − 14 1.01e − 75 2.69e − 383 5.00
M9 0.510333 13.3405 3 1.30e − 01 4.97e − 11 4.16e − 58 2.93e − 292 5.00
M10 0.507333 11.5897 3 1.30e − 01 7.41e − 13 2.03e − 69 5.35e − 351 5.00
Ex. 4
M1 3.145667 13.3900 3 1.03e − 00 1.70e − 06 6.06e − 35 2.87e − 177 5.00
M2 3.165667 13.1430 3 1.03e − 00 8.26e − 06 2.43e − 31 4.44e − 159 5.00
M3 3.172000 12.1992 3 1.03e − 00 1.27e − 05 3.52e − 30 4.69e − 153 5.00
M4 3.838667 10.2269 3 1.03e − 00 5.27e − 06 1.70e − 32 4.98e − 165 5.00
M5 3.166667 12.8339 3 1.03e − 00 6.62e − 07 3.43e − 38 1.05e − 194 5.00
M6 3.166667 12.5798 3 1.03e − 00 4.30e − 06 3.04e − 33 4.40e − 169 5.00
M7 3.203333 12.3574 3 1.03e − 00 4.20e − 05 8.30e − 28 2.05e − 141 5.00
M8 3.203333 12.7635 3 1.03e − 00 2.30e − 06 2.88e − 35 7.29e − 180 5.00



M9 3.187333 11.9899 3 1.03e − 00 9.84e − 08 1.20e − 41 2.70e − 211 5.00
M10 3.138667 13.9175 3 1.03e − 00 1.10e − 04 8.71e − 25 2.26e − 125 5.00
Ex. 5
M1 0.958333 8.10184 3 1.16e − 00 5.05e − 05 7.72e − 27 1.69e − 135 5.00
M2 0.968667 7.89846 3 1.16e − 00 1.94e − 05 3.25e − 29 1.12e − 147 5.00
M3 0.994667 6.87996 3 1.16e − 00 6.00e − 05 2.77e − 26 1.53e − 132 5.00
M4 1.187333 5.93794 3 1.16e − 00 4.51e − 05 4.40e − 27 1.02e − 136 5.00
M5 0.984333 7.66700 3 1.16e − 00 5.11e − 05 7.72e − 27 1.59e − 135 5.00
M6 0.979000 7.47034 3 1.16e − 00 6.90e − 05 5.54e − 26 4.85e − 131 5.00
M7 1.271000 7.29492 3 1.16e − 00 1.84e − 05 2.49e − 29 2.99e − 148 5.00
M8 0.973667 7.62934 3 1.16e − 00 3.84e − 05 1.54e − 27 4.12e − 139 5.00
M9 0.994667 6.73090 3 1.16e − 00 1.54e − 05 3.85e − 30 9.68e − 153 5.00
M10 0.950333 9.21534 3 1.16e − 00 3.25e − 04 7.78e − 22 1.59e − 109 5.00
Ex. 6
M1 1.562667 4.12262 3 1.43e − 00 1.09e − 05 1.73e − 31 9.06e − 161 5.00
M2 1.614670 3.97713 3 1.43e − 00 2.10e − 06 9.90e − 36 1.23e − 182 5.00
M3 2.010667 2.99759 3 1.43e − 00 6.82e − 07 1.03e − 38 4.27e − 198 5.00
M4 2.104333 2.90493 3 1.43e − 00 2.79e − 06 5.22e − 35 6.46e − 179 5.00
M5 1.604000 3.82578 3 1.43e − 00 1.28e − 05 4.68e − 31 1.64e − 158 5.00
M6 1.646000 3.69434 3 1.43e − 00 8.06e − 06 2.93e − 32 1.01e − 164 5.00
M7 3.432330 3.57621 3 1.43e − 00 2.00e − 06 7.74e − 36 3.60e − 183 5.00
M8 1.651000 3.81014 3 1.43e − 00 4.37e − 06 8.06e − 34 9.26e − 173 5.00
M9 2.036333 2.91934 3 1.43e − 00 8.21e − 06 3.47e − 32 2.53e − 164 5.00
M10 1.451333 5.07576 3 1.43e − 00 1.51e − 05 9.63e − 31 5.38e − 157 5.00
Ex. 7
M1 80.2435 6.271e − 1 3 4.02e − 00 7.37e − 06 2.33e − 32 1.14e − 164 5.00
M2 80.2890 6.134e − 1 3 4.02e − 00 1.94e − 05 1.44e − 30 5.02e − 156 5.00
M3 83.1950 4.429e − 1 3 4.02e − 00 1.93e − 05 1.37e − 30 3.79e − 156 5.00
M4 100.1565 4.235e − 1 3 4.02e − 00 1.60e − 06 1.37e − 37 9.75e − 193 5.00
M5 80.6330 5.995e − 1 3 4.02e − 00 1.39e − 03 8.65e − 20 1.25e − 100 5.00
M6 80.7735 5.866e − 1 3 4.02e − 00 7.26e − 05 1.11e − 27 1.41e − 141 5.00
M7 81.4765 5.746e − 1 3 4.02e − 00 1.45e − 05 3.32e − 31 3.25e − 159 5.00
M8 80.3515 5.987e − 1 3 4.02e − 00 6.02e − 05 2.88e − 28 1.10e − 144 5.00
M9 84.2890 4.360e − 1 3 4.02e − 00 1.32e − 03 6.62e − 20 3.23e − 101 5.00
M10 80.0155 9.997e − 1 3 4.02e − 00 8.62e − 05 9.53e − 27 2.41e − 136 5.00
Ex. 8
M1 160.609 1.257e − 2 3 2.11e + 01 1.03e − 07 2.90e − 48 1.05e − 248 5.00
M2 178.648 1.248e − 2 3 2.11e + 01 1.25e − 07 3.94e − 48 2.44e − 248 5.00
M3 177.469 8.420e − 3 3 2.11e + 01 1.26e − 07 4.19e − 48 3.35e − 248 5.00
M4 194.172 8.361e − 3 3 2.11e + 01 1.02e − 08 2.10e − 55 1.55e − 286 5.00
M5 164.485 1.239e − 2 3 2.11e + 01 6.49e − 06 7.07e − 38 2.17e − 195 5.00
M6 208.375 1.231e − 2 3 2.11e + 01 6.79e − 07 1.80e − 44 4.67e − 230 5.00



M7 411.750 1.222e − 2 3 2.11e + 01 5.40e − 08 5.89e − 50 1.82e − 257 5.00
M8 203.843 1.239e − 2 3 2.11e + 01 6.94e − 07 1.35e − 44 7.40e − 231 5.00
M9 185.093 8.380e − 3 3 2.11e + 01 6.51e − 06 7.22e − 38 2.44e − 195 5.00
M10 145.240 2.413e − 2 3 2.11e + 01 2.20e − 7 2.70e − 46 1.52e − 238 5.00
Ex. 9
M1 18.4060 8.285e − 4 3 9.28e − 00 3.56e − 03 3.96e − 20 1.67e − 104 5.00
M2 24.0050 8.260e − 4 4 9.27e − 00 1.07e − 02 4.46e − 17 1.37e − 88 5.00
M3 29.6325 5.525e − 4 4 9.26e − 00 2.20e − 02 3.11e − 15 4.25e − 79 5.00
M4 30.3205 5.514e − 4 4 9.27e − 00 9.59e − 03 2.28e − 17 4.23e − 90 5.00
M5 19.4610 8.236e − 4 3 9.28e − 00 2.26e − 04 2.31e − 26 6.35e − 136 5.00
M6 24.1095 8.212e − 4 4 9.27e − 00 4.79e − 03 6.22e − 19 5.64e − 98 5.00
M7 24.2425 8.187e − 4 4 9.29e − 00 7.61e − 03 8.08e − 18 2.67e − 92 5.00
M8 18.9765 8.236e − 4 3 9.28e − 00 1.84e − 03 2.14e − 21 1.13e − 110 5.00
M9 23.8200 5.514e − 4 3 9.28e − 00 1.27e − 03 2.84e − 22 3.83e − 115 5.00
M10 20.1005 1.634e − 3 4 9.20e − 00 8.00e − 02 1.05e − 11 1.03e − 60 5.00

4.1.1 Integro‑differential equation

The mathematical model describing the process of penetration of an electromagnetic field into a substance is represented by an integro-differential equation [24], given by

∂u/∂t − [ 1 + ∫_0^t ∫_0^1 (∂u/∂x)^2 dx dτ ] ∂^2u/∂x^2 = ϕ(x, t),  0 ≤ x ≤ 1,  t ≥ 0,    (4.3)

where ϕ(x, t) = (1/8)(9 − 3e^{−2} − (1 − 3e^{−2})e^{−2t})(x^2 − 5x + 4)e^{−x−t} − x(1 − x)e^{−x−t} and u = u(x, t) satisfies the boundary conditions u(0, t) = u(1, t) = 0, along with the initial condition u(x, 0) = x(1 − x)e^{−x}. Considering the domain D = {(x, t) | (x, t) ∈ [0,1] × [0,1]}, we intend to transform (4.3) into a finite-dimensional problem by partitioning D as

x_i = x_0 + ih,  i = 1, 2, ..., p,  h = 1/p,
t_j = t_0 + js,  j = 1, 2, ..., q,  s = 1/q,

where x_0 = t_0 = 0 and x_p = t_q = 1. Denoting u(x_i, t_j) = u_{i,j} for each i and j, the given boundary and initial conditions are transformed as u_{0,j} = u_{p,j} = 0 and u_{i,0} = ih(1 − ih)e^{−ih}. Further, at any point (x_i, t_j), the partial derivatives in (4.3) are approximated by the divided differences

∂u/∂x = (u_{i,j} − u_{i−1,j})/h,  ∂u/∂t = (u_{i,j} − u_{i,j−1})/s,  ∂^2u/∂x^2 = (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h^2,

and the double integral is approximated as

∫_0^{t_j} ∫_0^1 f(x, τ) dx dτ = hs Σ_{n=1}^{p} Σ_{l=1}^{j} f(x_n, t_l),  for each j = 1, 2, ..., q.

The following system of nonlinear equations in (p − 1) × q variables is thereby obtained:

(1/s)(u_{i,j} − u_{i,j−1}) − (1/h^2)[ 1 + (s/h) Σ_{n=1}^{p} Σ_{l=1}^{j} (u_{n,l} − u_{n−1,l})^2 ](u_{i+1,j} − 2u_{i,j} + u_{i−1,j}) − ϕ(x_i, t_j) = 0,    (4.4)

where i = 1, 2, ..., p − 1 and j = 1, 2, ..., q. In particular, for p = 11 and q = 10, the above reduces to a system of 10 × 10 nonlinear equations, for which the approximate numerical solution is plotted in Fig. 2. To obtain that solution and to further compare the performance of the methods, the initial approximation is taken as (3/4, ..., 3/4)^T with 100 components. The comparison of performance is depicted in Table 3. Note that the estimated values of the parameters for the considered problem are {m, l, ν_0, ν_1} = {100, 2.81, 63.50, 6.19}.
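A sketch of the residual of system (4.4) follows (assuming NumPy; the flattening convention for the unknowns is an assumption of this presentation). It makes explicit how the double-integral term is accumulated over all earlier time levels:

import numpy as np

p, q = 11, 10
h, s = 1.0/p, 1.0/q

def phi(x, t):
    # Source term of (4.3).
    c = (9 - 3*np.exp(-2) - (1 - 3*np.exp(-2))*np.exp(-2*t)) / 8
    return c*(x**2 - 5*x + 4)*np.exp(-x - t) - x*(1 - x)*np.exp(-x - t)

def F(u_flat):
    # Residual of (4.4); unknowns u_{i,j}, i = 1..p-1, j = 1..q.
    u = np.zeros((p + 1, q + 1))
    xg = np.arange(p + 1) * h
    u[:, 0] = xg*(1 - xg)*np.exp(-xg)        # initial condition u_{i,0}
    u[1:p, 1:] = u_flat.reshape(p - 1, q)    # boundary rows i = 0, p stay zero
    res = np.empty((p - 1, q))
    for j in range(1, q + 1):
        # (s/h) * sum over n = 1..p, l = 1..j of (u_{n,l} - u_{n-1,l})^2
        integ = (s/h) * np.sum((u[1:, 1:j+1] - u[:-1, 1:j+1])**2)
        for i in range(1, p):
            uxx = u[i+1, j] - 2*u[i, j] + u[i-1, j]
            res[i-1, j-1] = (u[i, j] - u[i, j-1])/s - (1 + integ)*uxx/h**2 - phi(i*h, j*s)
    return res.ravel()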

4.1.2 Fisher’s equation

Consider a particular case of Fisher's equation [25], which models population growth in a reaction-diffusion system and is expressed as

∂u/∂t = δ ∂^2u/∂x^2 + u(1 − u),    (4.5)

where δ is the diffusion coefficient. The boundary conditions on u(x, t) are imposed as ∂u/∂x(0, t) = 0 and ∂u/∂x(1, t) = 0 for all t ≥ 0, along with the initial condition u(x, 0) = 2/3 + (1/2)cos(πx) for 0 ≤ x ≤ 1. Consider the domain D = {(x, t) | (x, t) ∈ [0,1] × [0,1]}, partitioned with step sizes h = 1/p and s = 1/q for the x and t domains, respectively.
Fig. 2 Approximate numerical solution of the integro-differential equation

Table 3  Comparison of performance of methods for boundary value problems
Method M1 M2 M3 M4 M5 M6 M7 M8 M9 M10

Integro-differential equation
k 4 4 4 4 4 4 4 4 4 5
Ei 0.0830 0.0821 0.0585 0.0552 0.0811 0.0802 0.0792 0.0811 0.0581 0.154
CPU time 17.521 17.933 24.034 25.142 18.426 18.170 18.381 18.089 26.424 17.595
Fisher’s equation
k 5 5 5 5 5 5 5 5 5 6
Ei 0.00162 0.00161 0.00108 0.00107 0.00160 0.00160 0.00159 0.00160 0.00107 0.00318
CPU time 15.333 16.616 24.415 24.471 17.987 16.996 17.828 16.910 23.374 13.427

Fig. 3 Approximate numerical solution of Fisher's equation

That is, x_i = ih and t_j = js for each i = 1, 2, ..., p and j = 1, 2, ..., q. Denoting u(x_i, t_j) = u_{i,j} for each i, j, and approximating (4.5) at any point (x_i, t_j) using the divided differences

∂u/∂t = (u_{i,j} − u_{i,j−1})/s,  ∂^2u/∂x^2 = (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h^2,  ∂u/∂x = (u_{i+1,j} − u_{i,j})/h,

the given equation is transformed into a system of (p − 1) × q equations. Selecting p = 21, q = 20 and taking δ = 1 in particular, the approximate numerical solution of the system is plotted in Fig. 3. Starting with the approximation (1/2, ..., 1/2)^T with 400 components, the performance of the methods is compared in Table 3. The estimated values of the parameters for this problem are {m, l, ν_0, ν_1} = {400, 2.81, 3.9, 0.0025}.
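As with the previous problem, a sketch of the residual of the discretized Fisher system is given below (assuming NumPy; the ghost-value treatment of the Neumann conditions and the fraction 2/3 in the initial condition follow the reconstruction above):

import numpy as np

p, q, delta = 21, 20, 1.0
h, s = 1.0/p, 1.0/q
xg = np.arange(1, p) * h                         # interior nodes x_1, ..., x_{p-1}

def F(u_flat):
    # Residual of the discretized Fisher equation; unknowns u_{i,j}, i=1..p-1, j=1..q.
    u = u_flat.reshape(q, p - 1)                 # row j-1 holds u_{1,j}, ..., u_{p-1,j}
    u_init = 2.0/3.0 + 0.5*np.cos(np.pi * xg)    # initial layer u_{i,0}
    res = np.empty_like(u)
    for j in range(q):
        prev = u_init if j == 0 else u[j - 1]
        row = u[j]
        # Neumann conditions u_x(0,t) = u_x(1,t) = 0: copy the end values outward.
        ext = np.concatenate(([row[0]], row, [row[-1]]))
        uxx = (ext[2:] - 2*ext[1:-1] + ext[:-2]) / h**2
        res[j] = (row - prev)/s - delta*uxx - row*(1 - row)
    return res.ravel()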
The results in Table 3 clearly show that the proposed method is computationally more efficient than the existing methods for solving the considered boundary value problems, the only exception being the higher efficiency index of method M10, which, however, requires a larger number of iterations to converge.

5 Conclusion

The proposed two-step fifth-order iterative method for solving nonlinear systems utilizes two function evaluations, two Jacobian evaluations and two matrix inversions per iteration. We have shown the fifth order of convergence under certain specified theoretical conditions. The main idea in developing the new method is to use the product of a row vector function with a column vector function so as to produce a scalar weight function, due to which there is a significant reduction in the computational cost. This weight factor is precisely supported by the comparison of the efficiency indices of the existing fifth-order methods with that of the new method. Further, numerical experimentation has been carried out on a selected set of nonlinear systems of equations to compare the performance with the existing methods under the same environment and using high-precision arithmetic. It is found that the newly developed method, in general, outperforms the other methods in terms of robustness, accuracy and computational efficiency. Moreover, the calculated value of the approximate computational order of convergence supports the theoretically proven fifth order of convergence.

Data availability Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Declarations 

Conflict of interest  The authors declare no competing interests.

References
1. Ostrowski, A. M.: Solution of Equation and Systems of Equations. Academic Press, New York
(1960)
2. Ortega, J. M., Rheinboldt, W. C.: Iterative Solution of Nonlinear Equations in Several Variables.
Academic Press, New York (1970)
3. Traub, J. F.: Iterative Methods for the Solution of Equations. Chelsea Publishing Company, New
York (1982)
4. McNamee, J. M.: Numerical Methods for Roots of Polynomials, Part I. Elsevier, Amsterdam (2007)
5. Grau-Sánchez, M., Grau, À., Noguera, M.: On the computational efficiency index and some itera-
tive methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 236, 1259–1266
(2011)
6. Cordero, A., Hueso, J. L., Martínez, E., Torregrosa, J.R.: Increasing the convergence order of an
iterative method for nonlinear systems. Appl. Math. Lett. 25, 2369–2374 (2012)
7. Xu, Z., Jieqing, T.: The fifth order of three-step iterative methods for solving systems of nonlinear
equations. Math. Numer. Sin. 35, 297–304 (2013)
8. Sharma, J. R., Gupta, P.: An efficient fifth order method for solving systems of nonlinear equations.
Comput. Math. Appl. 67, 591–601 (2014)
9. Xiao, X. Y., Yin, H.: A new class of methods with higher order of convergence for solving systems
of nonlinear equations. Appl. Math. Comput. 264, 300–309 (2015)
10. Choubey, N., Jaiswal, J. P.: Improving the order of convergence and efficiency index of an iterative
method for nonlinear systems. Proc. Natl. Acad. Sci. India A - Phys. Sci. 86, 221–227 (2016)
11. Sharma, J. R., Guha, R. K.: Simple yet efficient Newton-like method for systems of nonlinear equa-
tions. Calcolo 53, 451–473 (2016)
12. Cordero, A., Gómez, E., Torregrosa, J. R.: Efficient high-order iterative methods for solving nonlin-
ear systems and their application on heat conduction problems. Complexity 2017, 6457532 (2017)
13. Xiao, X. Y., Yin, H.: Accelerating the convergence speed of iterative methods for solving nonlinear
systems. Appl. Math. Comput. 333, 8–19 (2018)
14. Sharma, R., Sharma, J. R., Kalra, N.: A modified Newton–Özban composition for solving nonlinear
systems. Int. J. Comput. Methods 17, 1950047 (2020)
15. Wu, S., Wang, H.: A modified Newton-like method for nonlinear equations. Comput. Appl. Math.
39, 1–18 (2020)
16. Solaiman, O. S., Hashim, I.: An iterative scheme of arbitrary odd order and its basins of attraction
for nonlinear systems. Comput. Mater. Contin. 66, 1427–1444 (2021)

13
Numerical Algorithms

17. Kansal, M., Cordero, A., Bhalla, S., Torregrosa, J. R.: New fourth- and sixth-order classes of itera-
tive methods for solving systems of nonlinear equations and their stability analysis. Numer. Algo-
rithms 87, 1017–1060 (2021)
18. Behl, R., Bhalla, S., Magreñán, Á. A., Kumar, S.: An efficient high order iterative scheme for large
nonlinear systems with dynamics. J. Comput. Appl. Math. 404, 113249 (2022)
19. Arroyo, V., Cordero, A., Torregrosa, J. R.: Approximation of artificial satellites’ preliminary orbits:
The efficiency challenge. Math. Comput. Model. 54, 1802–1807 (2011)
20. Brent, R. P.: Some efficient algorithms for solving systems of nonlinear equations. SIAM J. Numer.
Anal. 10, 327–344 (1973)
21. Wolfram, S.: The Mathematica Book, 5th edn. Wolfram Media (2003)
22. Chandrasekhar, S.: Radiative Transfer. Dover, New York (1960)
23. Hueso, J. L., Martínez, E., Teruel, C.: Convergence, efficiency and dynamics of new fourth and
sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 275, 412–
420 (2015)
24. Singh, H., Sharma, J. R.: Reduced cost numerical methods of sixth-order convergence for systems
of nonlinear models. Rev. Real Acad. Cienc. Exactas Fis. Nat. - A: Mat. 116, 1–24 (2022)
25. Narang, M., Bhatia, S., Alshomrani, A. S., Kanwar, V.: General efficient class of Steffensen type
methods with memory for solving systems of nonlinear equations. J. Comput. Appl. Math. 352,
23–39 (2019)

Publisher’s note  Springer Nature remains neutral with regard to jurisdictional claims in published
maps and institutional affiliations.

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the
author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article
is solely governed by the terms of such publishing agreement and applicable law.
