On stability criteria for neural networks with time-varying delay using Wirtinger-based multiple integral inequality

https://doi.org/10.1016/j.jfranklin.2015.08.024

Abstract

This paper investigates the problem of delay-dependent stability analysis of neural networks with time-varying delay. Based on the Wirtinger-based integral inequality, which provides a tighter lower bound than Jensen's inequality, a new Wirtinger-based multiple integral inequality is presented and applied to neural networks with time-varying delay by means of a reciprocally convex combination approach for higher-order cases. Three numerical examples are given to demonstrate the reduced conservatism of the proposed methods.

Introduction

Neural networks have found substantial application in many scientific areas such as signal processing, image decryption, pattern recognition, associative memories, fixed-point computations, optimization, feedback control, medical diagnosis, and financial applications [1], [2], [3]. It is well known that the existence of time delays often causes undesirable dynamic behavior, deteriorates system performance, and can even destroy stability. The occurrence of time delay in neural networks cannot be avoided, owing to the finite switching speed of amplifiers, the inherent communication time between neurons, and so on. Thus, many researchers have devoted their attention to analyzing the stability of time-delayed neural networks [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]. Until now, in order to reduce the conservatism of stability criteria, various methods have been developed, such as Park's inequality [20], the multiple integral approach [21], model transformation [22], free-weighting matrix techniques [23], [24], the convex combination technique [25], reciprocally convex optimization [26], and the delay-partitioning approach [8], [9], [10], [27], [28].

It should be pointed out that, in the stability analysis of time-delayed neural networks based on Lyapunov approaches, Jensen's inequality is an essential technique for reducing the conservatism of stability criteria, so it has been utilized to estimate an upper bound of the time derivative of the constructed Lyapunov functional in most existing papers. It is clear that reducing Jensen's gap enlarges the feasible region of stability criteria for time-delay systems. However, until now, the following Jensen's gap has commonly been used:
$$JG_1(x)=\int_a^b x^T(s)Wx(s)\,ds-\frac{1}{b-a}\left(\int_a^b x^T(s)\,ds\right)W\left(\int_a^b x(s)\,ds\right)\geq 0,$$
where $W>0$, and $a$ and $b$ are constants satisfying $b>a$. Only recently have a few works [29], [30], [31] reduced Jensen's gap. In [29], [30], based on Wirtinger's inequalities, a tighter lower bound of Jensen's inequality was proposed as
$$JG_2(x)=JG_1(x)-\frac{\pi^2}{4}\frac{1}{b-a}\,\nu^T(a,b)W\nu(a,b)\geq 0,$$
where $\nu(a,b)=\int_a^b x(s)\,ds-\frac{2}{b-a}\int_a^b\int_a^v x(s)\,ds\,dv$. To apply the Wirtinger inequalities, which carry some necessary assumptions, a particular signal must be chosen carefully; in that choice, an integration of states, $\frac{1}{b-a}\int_a^b\int_a^v x(s)\,ds\,dv$, is included in the augmented vectors to make use of further information on the time-delay system. Moreover, very recently, an improved result that reduces Jensen's gap further, namely
$$JG_3(x)=JG_1(x)-\frac{3}{b-a}\,\nu^T(a,b)W\nu(a,b)\geq 0,$$
has been presented in [31]. From the above, it is clear that $JG_3(x)\leq JG_2(x)\leq JG_1(x)$, so the use of $JG_3(x)$ yields a larger feasible region than the use of $JG_2(x)$ or $JG_1(x)$.
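
As a quick sanity check of the ordering above, the following short sketch (not from the paper) evaluates $JG_1$, $JG_2$, and $JG_3$ numerically for a sample trajectory on $[0,1]$; the quadrature grid, the weight $W$, and the test signal are illustrative assumptions.

```python
# Numerical sanity check of JG1(x) >= JG2(x) >= JG3(x) >= 0 on [a, b] = [0, 1];
# the grid, W, and the test trajectory are illustrative assumptions.
import numpy as np

a, b = 0.0, 1.0
W = np.array([[2.0, 0.5], [0.5, 1.0]])                  # any W > 0
s = np.linspace(a, b, 2001)
X = np.stack([np.sin(3.0 * s), np.exp(-s)], axis=1)     # samples of x(s), shape (N, 2)

def integrate(vals, grid):
    """Composite trapezoidal rule along axis 0."""
    return np.tensordot(np.diff(grid), 0.5 * (vals[1:] + vals[:-1]), axes=(0, 0))

Ix   = integrate(X, s)                                  # int_a^b x(s) ds
IxWx = integrate(np.einsum('ni,ij,nj->n', X, W, X), s)  # int_a^b x^T(s) W x(s) ds

# cumulative integral F(v) = int_a^v x(s) ds, then IIx = int_a^b F(v) dv
F = np.vstack([np.zeros((1, 2)),
               np.cumsum(0.5 * (X[1:] + X[:-1]) * np.diff(s)[:, None], axis=0)])
IIx = integrate(F, s)

JG1 = IxWx - Ix @ W @ Ix / (b - a)
nu  = Ix - 2.0 / (b - a) * IIx
JG2 = JG1 - (np.pi ** 2 / 4.0) / (b - a) * nu @ W @ nu
JG3 = JG1 - 3.0 / (b - a) * nu @ W @ nu
print(JG1, JG2, JG3)                                    # expect JG1 >= JG2 >= JG3 >= 0
```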

After the Wirtinger-based integral inequality was presented in [31], many researchers have used it to obtain less conservative criteria for time-delay systems [32], [33], [34], [35], [36], [37]. In [34], the problem of stability analysis for neutral-type neural networks with discrete and distributed delays was investigated by using multiple integral, Wirtinger-based integral, and delay fragmentation approaches. In [35], a criterion for the stability of recurrent neural networks with time-varying delay was derived by employing a triple integral term in an augmented Lyapunov–Krasovskii functional; to derive the criterion, the Wirtinger-based integral inequality and two zero-value free-weighting-matrix equations were used. The authors in [36] studied the robust stochastic stability problem for a class of neutral-type uncertain neural networks with Markovian jumping parameters and time-varying delays, in which delay-partitioning and Wirtinger-based integral inequality techniques were used to construct an augmented Lyapunov–Krasovskii functional including several triple integral terms. Shao et al. [37] proposed a mode- and delay-dependent $H_\infty$ filter design result for static neural networks with Markovian jumping parameters and time-varying delay; there, triple integral terms in the Lyapunov functional and the Wirtinger-based integral approach were utilized as well. As seen in the above literature, however, the Wirtinger-based integral approach was applied only to single integral terms without any modification. In those works, the double or multiple integral and the Wirtinger-based integral approaches were employed together but used separately, i.e., double or multiple integral terms were handled by Jensen's inequality even though the Wirtinger-based integral approach gives a tighter bound than Jensen's.

On the other hand, a few attempts have recently been made to extend the Wirtinger-based integral inequality. In [38], it was combined with the delay fragmentation approach for time-delay systems, where the Wirtinger-based integral approach was applied to every partitioned single integral term, and the developed criterion was shown to be less conservative by comparison with other works. The authors of [29], [30], [31] proposed a more general inequality that further reduces Jensen's gap, the Bessel–Legendre inequality, for the constant delay case [39]. Very recently, the Wirtinger-based integral inequality was extended to a double integral term in [40].

Inspired by the above discussion, this paper mainly focuses on presenting a novel approach, called the Wirtinger-based multiple integral inequality, for neural networks with time-varying delay. Based on a lemma in [40], a novel inequality that is a multiple integral form of the Wirtinger-based integral inequality is proposed in Lemma 5. In addition, in order to apply the derived inequality to neural networks with time-varying delay, the reciprocally convex combination approach is generalized to the higher-order case. Numerical examples are given to illustrate that the proposed methods are effective and lead to less conservative results.

Notations: $\mathbb{R}^n$ is the $n$-dimensional Euclidean space. $X>0$ (respectively, $X\geq 0$) means that the matrix $X$ is a real symmetric positive definite (respectively, positive semi-definite) matrix. The symbol $\ast$ in a matrix represents the elements below the main diagonal of a symmetric matrix. $I_n$ and $0_{n\times m}$ denote the $n\times n$ identity matrix and the $n\times m$ zero matrix, respectively. $\mathrm{diag}\{\cdot\}$ denotes a block diagonal matrix. $\mathrm{Sym}\{X\}$ indicates $X+X^T$. $X_{[f(t)]}\in\mathbb{R}^{m\times n}$ means that the elements of the matrix $X$ include the values of $f(t)$. For $X\in\mathbb{R}^{m\times n}$, $X^{\perp}$ denotes a basis for the null space of $X$. $\binom{i}{j}=\frac{i!}{j!(i-j)!}$ indicates the binomial coefficient.


Preliminaries

Consider the following neural networks with time-varying delay:
$$\dot{x}(t)=-Ax(t)+B_1 f(x(t))+B_2 f(x(t-h(t))),\qquad x(t)=\phi(t),\quad t\in[-h,0],\tag{1}$$
where $x(t)\in\mathbb{R}^n$ is the neuron state vector, $n$ is the number of neurons in the neural network, $f(x(t))\in\mathbb{R}^n$ denotes the neuron activation function, $A=\mathrm{diag}\{a_i\}\in\mathbb{R}^{n\times n}$ is a positive diagonal matrix, and $B_1=(b^1_{ij})_{n\times n}\in\mathbb{R}^{n\times n}$ and $B_2=(b^2_{ij})_{n\times n}\in\mathbb{R}^{n\times n}$ are the interconnection matrices representing the weight coefficients of the neurons. The delay $h(t)$ is a time-varying continuous function satisfying $0\leq h(t)\leq h$
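
To make the structure of system (1) and the role of the delay buffer concrete, the following minimal sketch (an illustration with placeholder data, not code from the paper) integrates a system of this form with a forward-Euler scheme; the matrices, activation, delay profile, and step size are all assumptions.

```python
# Minimal sketch of simulating a system of the form (1); all numerical
# choices (matrices, activation, delay profile, step size) are placeholders.
import numpy as np

def simulate_dnn(A, B1, B2, f, h_of_t, h_max, phi, T, dt=1e-3):
    """Forward-Euler integration of dx/dt = -A x + B1 f(x) + B2 f(x(t - h(t)))."""
    n = A.shape[0]
    buf = int(np.ceil(h_max / dt))               # history length covering [-h, 0]
    x = np.zeros((buf + int(T / dt) + 1, n))
    x[: buf + 1] = phi                           # constant initial function phi on [-h, 0]
    for k in range(buf, x.shape[0] - 1):
        t = (k - buf) * dt
        kd = k - int(round(h_of_t(t) / dt))      # index of the delayed state x(t - h(t))
        x[k + 1] = x[k] + dt * (-A @ x[k] + B1 @ f(x[k]) + B2 @ f(x[kd]))
    return x[buf:]                               # trajectory on [0, T]

# toy one-neuron instance with a sinusoidal delay in [0, 1] (purely illustrative)
traj = simulate_dnn(A=np.array([[1.5]]), B1=np.array([[0.3]]), B2=np.array([[0.4]]),
                    f=np.tanh, h_of_t=lambda t: 0.5 * (1.0 + np.sin(t)), h_max=1.0,
                    phi=np.array([1.0]), T=20.0)
print(traj[-1])                                  # expected to approach the origin
```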

Main results

Before proceeding further, $e_i\in\mathbb{R}^{(7+2M)n\times n}$ $(i=1,2,\ldots,7+2M)$ are defined as block entry matrices, e.g., $e_2^T=[\,0_{n}\;\; I_n\;\; 0_{n\times(5+2M)n}\,]$.
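
For concreteness, a short sketch of how such block entry matrices can be assembled is given below; the convention that $e_i$ carries $I_n$ in its $i$-th block row (so that $e_i^T\zeta$ selects the $i$-th block of an augmented vector $\zeta$) is the usual one and is assumed here.

```python
# Sketch of the block entry matrices e_i in R^{(7+2M)n x n}; the convention
# that e_i holds I_n in its i-th block row is assumed.
import numpy as np

def block_entry(i, n, total_blocks):
    """Return e_i of size (total_blocks*n, n) with I_n placed in block row i."""
    e = np.zeros((total_blocks * n, n))
    e[(i - 1) * n : i * n, :] = np.eye(n)
    return e

n, M = 2, 2
total = 7 + 2 * M                      # number of blocks in the augmented vector
e2 = block_entry(2, n, total)
print(e2.T)                            # e_2^T = [0_n  I_n  0_{n x (5+2M)n}]
```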

Theorem 1

For given positive scalars $h$ and $M\geq 2$, any scalar $\mu$, and diagonal matrices $L_p=\mathrm{diag}\{L_1^+,\ldots,L_n^+\}$ and $L_m=\mathrm{diag}\{L_1^-,\ldots,L_n^-\}$, the system (1) is asymptotically stable if there exist positive definite matrices $P\in\mathbb{R}^{(1+M)n\times(1+M)n}$, $Q_i\in\mathbb{R}^{2n\times 2n}$ $(i=1,2)$, $R_i\in\mathbb{R}^{n\times n}$ $(i=1,\ldots,M)$, positive diagonal matrices $H_i=\mathrm{diag}\{h_{i1},\ldots,h_{in}\}$ $(i=1,2,3)$ and $\Lambda_i=\mathrm{diag}\{k_{i1},\ldots,k_{in}\}$ $(i=1,2)$, and any matrices $T_{ij}\in\mathbb{R}^{2n\times 2n}$ $(i=1,\ldots,M$, $j$

Numerical example

In this section, three numerical examples are provided to show the improvement of the stability criteria. In the simulations, the results for the case of unknown $\mu$ can be obtained by applying Theorem 1 with $Q_1=0$.

Example 1

Consider the neural networks (1) with the following parameters:
$$A=\begin{bmatrix}2 & 0\\ 0 & 2\end{bmatrix},\quad B_1=\begin{bmatrix}1 & 1\\ -1 & -1\end{bmatrix},\quad B_2=\begin{bmatrix}0.88 & 1\\ 1 & 1\end{bmatrix},\quad L_m=\mathrm{diag}\{0,0\},\quad L_p=\mathrm{diag}\{0.4,0.8\}.$$
With the above parameters, Table 1 is obtained by applying Theorem 1; it lists the maximum admissible delay bounds for various values of $\mu$.
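
As a rough complement to Table 1 (whose entries are not reproduced here), the following self-contained sketch, mirroring the simulation sketch given after Eq. (1), simulates Example 1 with an activation $f_i(x_i)=L_i^+\tanh(x_i)$ that respects the sector bounds in $L_m$ and $L_p$; the chosen activation, delay profile, horizon, and delay bound $h$ are illustrative assumptions, not values from Table 1.

```python
# Illustrative simulation of Example 1 (parameters above); the activation,
# the delay profile h(t), the bound h_max, and the horizon are assumptions,
# not results from the paper's Table 1.
import numpy as np

A  = np.diag([2.0, 2.0])
B1 = np.array([[1.0, 1.0], [-1.0, -1.0]])
B2 = np.array([[0.88, 1.0], [1.0, 1.0]])
Lp = np.array([0.4, 0.8])
f  = lambda x: Lp * np.tanh(x)                  # sector-bounded activation in [0, L_i^+]

h_max, dt, T = 1.0, 1e-3, 30.0                  # h_max chosen for illustration only
steps, buf = int(T / dt), int(np.ceil(h_max / dt))

x = np.zeros((buf + steps + 1, 2))
x[: buf + 1] = np.array([1.0, -1.0])            # constant initial function phi

for k in range(buf, buf + steps):
    t = (k - buf) * dt
    h_t = 0.5 * h_max * (1.0 + np.sin(0.5 * t)) # time-varying delay in [0, h_max]
    kd = k - int(round(h_t / dt))               # index of x(t - h(t))
    x[k + 1] = x[k] + dt * (-A @ x[k] + B1 @ f(x[k]) + B2 @ f(x[kd]))

print("state at t = T:", x[-1])                 # expected to decay toward the origin
```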

Example 2

Consider the neural networks (1) with the

Conclusions

In this paper, the problem of delay-dependent stability analysis of neural networks with time-varying delay has been discussed. Based on Lemma 4, which is the Wirtinger-based integral inequality for a double integral term, a new lemma, the Wirtinger-based multiple integral inequality, has been proposed as Lemma 5. By generalizing the lower bound lemma to the higher-order case, the new inequality of Lemma 5 can be applied to neural networks with time-varying delay. Three numerical examples have been given to

Acknowledgments

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2013R1A1A2A10005201).
