Robust stability analysis for Markovian jumping interval neural networks with discrete and distributed time-varying delays

https://doi.org/10.1016/j.chaos.2012.01.011

Abstract

This paper investigates robust stability analysis for Markovian jumping interval neural networks with discrete and distributed time-varying delays. The parameter uncertainties are assumed to be bounded in given compact sets. The delay is assumed to be time-varying and to belong to a given interval, which means that the lower and upper bounds of the interval time-varying delay are available. Based on a new Lyapunov–Krasovskii functional (LKF), some inequality techniques and stochastic stability theory, new delay-dependent stability criteria are obtained in terms of linear matrix inequalities (LMIs). Finally, two numerical examples are given to illustrate the reduced conservatism and effectiveness of the theoretical results.

Highlights

► Robust stability analysis for Markovian jumping interval neural networks is considered.
► Both linear fractional and interval uncertainties are considered.
► A new LKF is constructed with triple integral terms.
► MATLAB LMI control toolbox is used to validate theoretical results.
► Numerical examples are given to illustrate the effectiveness of the proposed method.

Introduction

During the last decade, neural networks have been extensively studied for their successful applications in signal processing, pattern recognition, static image processing, associative memory and combinatorial optimization [1], [2]. Hopfield [3] realized that in hardware implementations time delays occur due to the finite switching speed of the amplifiers. He modelled a continuous-time dynamical neural network containing n dynamic neural units (DNUs) by an analog RC (resistance–capacitance) network circuit. A time delay was subsequently introduced into this model by Marcus and Westervelt [4] for Hopfield neural networks, giving the dynamics

dx(t)/dt = −Bx(t) + Aσ(x(t − τ)) + J,

where x = [x_1, …, x_n]^T, B = diag(b_1, …, b_n) with b_i = 1/(R_iC_i), A = (a_{ij}) is the n × n irreducible connection matrix with a_{ij} = w_{ij}/C_i, σ(x) = [σ_1(x_1), …, σ_n(x_n)]^T and J = [J_1, …, J_n]^T with J_i = s_i/C_i, for i, j = 1, 2, …, n. Some results on the dynamical behaviour of delayed neural networks have been reported in [5], [6], [7]. Delay-dependent methods make use of information on the length of the delays; much of the work on delayed neural networks has focused on stability analysis, and a large number of results are available in the literature, see, for example, [8], [9], [10], [11], [12], [13], [14]. However, neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. Thus, there is a distribution of conduction velocities along these pathways and a distribution of propagation delays. In these circumstances the signal propagation is not instantaneous and cannot be modelled with constant or discrete delays. In this case, it is more appropriate to incorporate continuously distributed delays into neural network models (see [25] and the references therein) (see Fig. 1, Fig. 2, Fig. 3, Fig. 4).
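To make the role of the delay τ in this model concrete, the following minimal sketch integrates the Marcus–Westervelt equation above with a fixed-step Euler scheme and a constant initial history. All numerical values (B, A, J, τ, the tanh activation and the initial history) are illustrative choices, not parameters taken from [3] or [4].

```python
import numpy as np

# Euler integration of the delayed Hopfield model
#   dx/dt = -B x(t) + A sigma(x(t - tau)) + J
# with a constant initial history on [-tau, 0].  All parameter values
# below are illustrative only.
n = 2
B = np.diag([1.0, 1.2])                  # b_i = 1/(R_i C_i)
A = np.array([[0.5, -0.3], [0.2, 0.4]])  # a_ij = w_ij / C_i
J = np.array([0.1, -0.1])                # J_i = s_i / C_i
sigma = np.tanh                          # sigmoidal activation
tau, dt, T = 1.0, 0.01, 20.0

delay_steps = int(round(tau / dt))
steps = int(round(T / dt))
x = np.zeros((delay_steps + steps + 1, n))
x[:delay_steps + 1] = 0.2                # history x(s) = 0.2 for s in [-tau, 0]

for k in range(delay_steps, delay_steps + steps):
    # delayed state enters only through the activation term
    x[k + 1] = x[k] + dt * (-B @ x[k] + A @ sigma(x[k - delay_steps]) + J)

print("x(T) =", x[-1])
```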

Moreover, because of unavoidable factors such as modelling errors, external perturbations and parameter fluctuations, a neural network model inevitably involves uncertainties such as perturbations and component variations, which may change the stability of the network. One reasonable way to analyse such uncertainty is to assume that the parameters lie in certain intervals. Recently, much attention has been paid to the robust stability of interval neural networks; see, for example, [15], [16], [17], [18]. In addition, a new type of uncertainty, namely the linear fractional form, is considered in this paper; it includes norm-bounded uncertainties as a special case [19], [20], [21], [22].

In practice, a neural network sometimes has finitely many state representations (also called modes, patterns, or clusters), and the modes may switch (or jump) from one to another at different times; it has been shown that the switching (or jumping) between different modes can be governed by a Markov chain. Neural networks of this kind are of great significance [23], [24], [25], [26], [27], [28].

Based on the above discussion, the problem of robust stability analysis for Markovian jumping interval neural networks with discrete and distributed time-varying delays is studied. The main purpose of this paper is to study robust stability for Markovian jumping interval neural networks under the constraint that the lower and upper bounds of the delay interval are known. Two types of uncertainty, namely linear fractional uncertainty and interval uncertainty, are discussed. To the best of our knowledge, the stability of Markovian jumping neural networks under both linear fractional and interval uncertainties has not yet been investigated, although it is important in both theory and applications; this motivates the present work. By constructing a new Lyapunov–Krasovskii functional with triple integral terms and employing some analysis techniques, sufficient conditions are derived for the considered neural networks in terms of LMIs, which can easily be checked with the MATLAB LMI control toolbox. Numerical examples are given to illustrate the effectiveness of the proposed method.

Notations: Throughout this paper, R^n and R^{n×n} denote, respectively, the n-dimensional Euclidean space and the set of all n × n real matrices. The superscript T denotes transposition, and the notation X ≥ Y (respectively, X > Y), where X and Y are symmetric matrices, means that X − Y is positive semi-definite (respectively, positive definite). I_n is the n × n identity matrix. ∣ · ∣ is the Euclidean norm in R^n. Moreover, let (Ω, F, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions, that is, the filtration contains all P-null sets and is right continuous. The notation ∗ always denotes the symmetric block in a symmetric matrix. Sometimes the arguments of a function or a matrix are omitted in the analysis when no confusion can arise.

Section snippets

Problem description and preliminaries

Consider the following Markovian jumping Hopfield neural network with time-varying delays described by

ẏ_m(t) = −a_m(η_t)y_m(t) + Σ_{n=1}^{p} w_{mn}^1(η_t)g_n(y_n(t)) + Σ_{n=1}^{p} w_{mn}^2(η_t)g_n(y_n(t − τ(t))) + Σ_{n=1}^{p} w_{mn}^3(η_t) ∫_{t−σ(t)}^{t} g_n(y_n(s)) ds + I_m,  m, n = 1, 2, …, p,

where y_m(t) is the activation of the mth neuron, g_n(·) denotes the signal function of the nth neuron, I_m denotes the external input at time t, a_m(η_t) is a positive number denoting the charging time constant or passive decay rate of the mth neuron, and w_{mn}^1(η_t), w_{mn}^2(η_t) and w
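Before the stability analysis, a simulation sketch may help fix ideas about how the mode process η_t interacts with the discrete delay τ(t) and the distributed delay σ(t). The sketch below uses a simple Euler scheme, approximates the distributed-delay integral by a rectangle rule, and samples a two-state Markov chain step by step from its generator. Every numerical value here (system matrices, delay profiles, input and generator) is an illustrative placeholder rather than data from the paper.

```python
import numpy as np

# Euler simulation of a two-mode switched delayed network of the form
#   y'(t) = -A(eta) y(t) + W1(eta) g(y(t)) + W2(eta) g(y(t - tau(t)))
#           + W3(eta) \int_{t-sigma(t)}^{t} g(y(s)) ds + I,
# with eta a continuous-time Markov chain.  All values are placeholders.
rng = np.random.default_rng(0)
n, dt, T = 2, 0.01, 10.0
g = np.tanh
A  = [np.diag([2.0, 2.5]), np.diag([1.8, 2.2])]
W1 = [0.3 * np.eye(n), 0.2 * np.eye(n)]
W2 = [np.array([[-0.5, 0.1], [0.2, -0.4]]),
      np.array([[-0.3, 0.2], [0.1, -0.6]])]
W3 = [0.1 * np.eye(n), 0.2 * np.eye(n)]
I  = np.array([0.1, -0.1])
Gamma = np.array([[-2.0, 2.0], [3.0, -3.0]])   # transition-rate (generator) matrix
tau = lambda t: 0.5 + 0.2 * np.sin(t)          # discrete time-varying delay
sig = lambda t: 0.4 + 0.1 * np.cos(t)          # distributed-delay window length

steps = int(T / dt)
hist = int(1.0 / dt)                           # history buffer covering the max delay
y = np.zeros((steps + hist + 1, n))
y[:hist + 1] = 0.3                             # constant initial history
mode = 0
for k in range(hist, hist + steps):
    t = (k - hist) * dt
    # leave the current mode with probability ~ -Gamma[mode, mode] * dt
    if rng.random() < -Gamma[mode, mode] * dt:
        mode = 1 - mode
    d1 = int(round(tau(t) / dt))               # discrete-delay index
    d2 = int(round(sig(t) / dt))               # distributed-delay window in steps
    dist = dt * g(y[k - d2:k + 1]).sum(axis=0) # rectangle rule for the integral
    dy = (-A[mode] @ y[k] + W1[mode] @ g(y[k])
          + W2[mode] @ g(y[k - d1]) + W3[mode] @ dist + I)
    y[k + 1] = y[k] + dt * dy

print("terminal state:", y[-1])
```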

Main results

In this section, we derive a new delay-dependent criterion for the asymptotic stability of system (5) using the Lyapunov functional method combined with the linear matrix inequality approach. We first state and prove the following Theorem 3.1 for the system without uncertainties.

Theorem 3.1

For given scalars h_2 > h_1 ≥ 0, σ_M and μ, the equilibrium solution of neural network (5) is globally asymptotically stable in the mean square if there exist positive definite matrices P_i = P_i^T > 0, R_l = R_l^T > 0 (l = 1, 2, 3, 4), Q_k = Q_k^T > 0 (k = 1, 2, 3, 4, 5), U_1 = U_1^T >
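Theorem 3.1 is stated as an LMI feasibility problem, which in the paper is checked with the MATLAB LMI control toolbox. Purely as an illustration of the mechanics, the sketch below poses a much simpler, delay-independent LMI test (a classical Lyapunov–Krasovskii condition with a diagonal multiplier Q) for the mode-1 data of Example 5.1, using Python with cvxpy. It is not the delay-dependent LMI of Theorem 3.1 and may be considerably more conservative; it only shows how such conditions are posed and solved numerically.

```python
import cvxpy as cp
import numpy as np

# Illustrative feasibility check of a simple delay-independent LMI for a
# single-mode delayed network x'(t) = -A x(t) + W f(x(t - tau)) with
# |f_i(s)| <= L_i |s|.  This is NOT the LMI of Theorem 3.1; the paper's
# delay-dependent criteria are solved with the MATLAB LMI control toolbox.
n = 2
A = np.diag([3.4888, 3.2684])                         # A_1 from Example 5.1
W = np.array([[-0.8620, -1.2919], [-0.6841, -2.0729]]) # W_21 from Example 5.1
L = np.eye(n)                                          # Lipschitz bound of tanh

P = cp.Variable((n, n), symmetric=True)
q = cp.Variable(n, nonneg=True)                        # diagonal multiplier Q = diag(q)
Q = cp.diag(q)

# block LMI: [ -A'P - PA + L'QL   PW ; W'P   -Q ] < 0
lmi = cp.bmat([[-A.T @ P - P @ A + L.T @ Q @ L, P @ W],
               [W.T @ P, -Q]])
eps = 1e-6
constraints = [P >> eps * np.eye(n), lmi << -eps * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("simplified LMI feasible:", prob.status == cp.OPTIMAL)
```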

Interval uncertainty

In this section, we consider the delayed neural network (5) described by

ẋ(t) = −A_i x(t) + W_{1i} f(x(t)) + W_{2i} f(x(t − τ(t))) + W_{3i} ∫_{t−σ(t)}^{t} f(x(s)) ds.

In the practical implementation of neural networks, the values of the constants and weight coefficients depend on resistance and capacitance values, which are subject to uncertainties. This may lead to deviations in the values of a_{ki}, w_{1kli}, w_{2kli} and w_{3kli}. Hence, it is important to ensure the global robust stability of the designed network against
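Interval uncertainty of this kind is usually handled by rewriting each interval matrix as a nominal part plus a bounded perturbation. The snippet below sketches the standard midpoint–radius decomposition for an illustrative interval matrix; the bounds are made up for the example, and the exact factorization employed in the paper (e.g. a linear fractional or norm-bounded form) may differ in detail.

```python
import numpy as np

# Sketch of the usual interval-uncertainty decomposition: a matrix
# A in [A_lower, A_upper] is written as A = A0 + dA, where A0 is the
# midpoint and |dA| <= R entrywise (R the radius).  Bounds are illustrative.
A_lower = np.array([[2.0, -0.3], [-0.2, 1.5]])
A_upper = np.array([[2.4,  0.1], [ 0.2, 1.9]])

A0 = 0.5 * (A_upper + A_lower)   # nominal (midpoint) matrix
R  = 0.5 * (A_upper - A_lower)   # uncertainty radius, entrywise bound on dA

# any admissible realization can be written as A0 + Delta * R with |Delta| <= 1
rng = np.random.default_rng(1)
Delta = rng.uniform(-1.0, 1.0, size=A0.shape)
A_sample = A0 + Delta * R
assert np.all(A_sample >= A_lower - 1e-12) and np.all(A_sample <= A_upper + 1e-12)
print("midpoint:\n", A0, "\nradius:\n", R)
```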

Numerical examples

In this section, we will provide numerical examples to show the effectiveness of the proposed methods.

Example 5.1

Consider the neural network with Markovian jumping parameters (23) with

A_1 = [3.4888 0; 0 3.2684], A_2 = [2.4898 0; 0 0.0211],
W_{21} = [−0.8620 −1.2919; −0.6841 −2.0729], W_{22} = [−2.8306 0.4978; −0.8436 −1.0115],
W_{31} = [0.5 −0.5; 0.2 0.7], W_{32} = [0.3 0.2; −0.5 0.4], Γ = [−2 2; 3 −3],

with activation function f(x(t)) = tanh(x(t)); assumption (H1) is satisfied with L = diag{1, 1}. In the case W_{11} = W_{12} = 0, system (23) is similar to system (8) of [25]. Using
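The following short script performs basic sanity checks on the data listed above: it verifies that Γ is a valid transition-rate (generator) matrix and computes its stationary distribution, and it numerically confirms that tanh is compatible with L = diag{1, 1}, assuming (H1) is the usual Lipschitz-type condition. It does not reproduce the LMI computations of the example.

```python
import numpy as np

# Sanity checks on the data of Example 5.1: Gamma should have zero row sums
# with nonnegative off-diagonal entries, and tanh has global Lipschitz
# constant 1, consistent with L = diag{1, 1} (assuming (H1) is a
# Lipschitz-type condition).
Gamma = np.array([[-2.0, 2.0], [3.0, -3.0]])
assert np.allclose(Gamma.sum(axis=1), 0.0)
assert np.all(Gamma - np.diag(np.diag(Gamma)) >= 0.0)

# stationary distribution pi of the mode process: pi @ Gamma = 0, sum(pi) = 1
ns = Gamma.shape[0]
M = np.vstack([Gamma.T, np.ones(ns)])
b = np.concatenate([np.zeros(ns), [1.0]])
pi, *_ = np.linalg.lstsq(M, b, rcond=None)
print("stationary distribution:", pi)     # [0.6, 0.4] for this Gamma

# numerical check that the slope of tanh never exceeds 1
s = np.linspace(-5.0, 5.0, 2001)
slopes = np.abs(np.diff(np.tanh(s)) / np.diff(s))
print("max observed slope of tanh:", slopes.max())
```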

Conclusion

In this paper, we have dealt with the problem of robust stability analysis for Markovian jumping interval neural networks with discrete and distributed time-varying delays. Two types of uncertainty, namely linear fractional uncertainty and interval uncertainty, have been discussed. The stability analysis of Markovian jumping neural networks with linear fractional uncertainty is new in the literature. By employing a combination of a new LKF and inequality techniques, new delay-dependent criteria

Acknowledgement

The authors are very thankful to the referees for their valuable comments and suggestions for improving this manuscript.

References (33)


The work of the authors was supported by the Department of Science and Technology, New Delhi, India under the sanctioned No. SR/S4/MS:485/07.
