1 Introduction

In the past two decades, there has been increasing research interest in analyzing the dynamic behaviors of neural networks due to their extensive applications in areas such as signal processing, pattern recognition, engineering optimization, and associative memory [1–3]. These applications typically require the designed neural networks to be globally exponentially or asymptotically stable. It is well known that inherent time delays may cause oscillation and instability in many dynamical networks. On the other hand, as pointed out in [4], a time delay is sometimes purposely introduced into the neural network model, which is effective for certain engineering applications such as speed detection of moving objects and processing of moving images. In general, time delays can be categorized as constant delays, time-varying delays, and distributed delays. Accordingly, the stability of delayed neural networks has attracted growing research interest in recent years (see, for example, [5–10]).

Recently, the state estimation problem for delayed neural networks has attracted a great deal of interest, and considerable research effort has been devoted to this fruitful topic (see, for example, [11–25]). As is well known, a neural network is a highly interconnected system with a large number of neurons, and it is often designed to solve complex nonlinear problems; handling such problems typically requires substantial connections between neurons. Accordingly, in large-scale neural networks, it may be very difficult and expensive (or even impossible) to acquire complete information about all neuron states. Furthermore, in many practical applications, one needs the information of the neuron states in order to achieve certain objectives such as system modeling and state feedback control. For instance, a recurrent neural network was presented in [26] to model an unknown nonlinear system, and the neuron states were utilized to implement a control law. The objective of state estimation for delayed neural networks is to estimate the neuron states from the observed network measurements. The state estimation problem for delayed neural networks was first studied in [11]. The design of state estimators for uncertain neural networks via the integral-inequality method was investigated in [12]. Recently, the authors in [13] proposed a scaling-parameter approach to the state estimation problem of neural networks with time-varying delay. Also, state estimation of recurrent neural networks with time-varying delay using the delay-partitioning approach was introduced in [14].

On the other hand, digital controllers have been widely used for controlling complex dynamical systems in industry [27], including neural networks [28]. In order to improve the intersampling performance, hybrid system models with both continuous-time and discrete-time signals are generally built through a zero-order hold (ZOH); such systems are also known as sampled-data systems [29]. In these systems, the control signal is kept constant during each sampling period and cannot be changed to cope with the nonlinearity of the plant [30]. This characteristic makes the analysis and design more difficult. Two main approaches have been used for the sampled-data control of linear systems, both leading to conditions in terms of linear matrix inequalities (LMIs). The first is the input delay approach, in which the system is modeled as a continuous-time system with a delayed control input. The second is based on representing the sampled-data system as an impulsive model. These two approaches are discussed in detail in [31].
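As a brief illustration of the input delay approach, a piecewise-constant (ZOH) control signal can be rewritten as a delayed continuous-time input:

$$ u(t)=u(t_{k})=u\bigl(t-(t-t_{k})\bigr)=u\bigl(t-d(t)\bigr),\quad t_{k}\leq t<t_{k+1}, $$

where d(t)=t−t_k is a sawtooth delay satisfying 0≤d(t)<t_{k+1}−t_k and \(\dot{d}(t)=1\) for t≠t_k; the same device, shifted by a transmission delay η, is used in Section 2.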

Additionally, the stabilization problem for sampled-data neural-network-based control systems was considered in [30]. The authors in [32] dealt with sampled-data fuzzy controller design for time-delay nonlinear systems via the fuzzy-model-based LMI approach. In [31], sampled-data control of linear systems under uncertain sampling with a known upper bound on the sampling intervals was considered. The synchronization of neural networks with time-varying delays based on sampled-data control has been investigated in [33, 34]. Sampled-data stabilization based on Wirtinger’s inequality and Lyapunov methods was proposed in [35]. Also, a discontinuous Lyapunov function method was introduced by using the impulsive-system representation of sampled-data systems (see, for example, [29, 35]). In [36], a discontinuous Lyapunov functional approach to the synchronization of time-delay neural networks using sampled data was further studied. Very recently, the problem of exponential state estimation for delayed recurrent neural networks with sampled data was discussed in [37]. However, a sampled-data network measurement in the presence of a constant input delay was not considered in [37]. For the state estimation of neural networks, it is important to study how the sampled data affect the estimation performance and to derive sampling-dependent conditions that guarantee a prescribed estimation performance, based on the extended Wirtinger inequality and the discontinuous Lyapunov functional approach. To the best of the authors’ knowledge, no related results have been established for the state estimation of delayed neural networks under a sampled-data network measurement in the presence of a constant input delay. Therefore, this is the first attempt to deal with the state estimation problem for neural networks with time-varying delay under a sampled-data network measurement in the presence of a constant input delay, based on the discontinuous Lyapunov functional approach and the convex combination technique.

Based on the above discussion, in this paper the problem of state estimation for delayed neural networks with sampled data is considered. By constructing an appropriate Lyapunov–Krasovskii functional involving discontinuous terms, introducing free-weighting matrices, using the convex combination technique, and employing some analysis techniques, several criteria for the existence of the state estimator are derived for the networks in terms of LMIs, which can be easily solved by the MATLAB LMI Control Toolbox. A numerical example and simulations are given to illustrate the effectiveness and reduced conservatism of the proposed method.

Notations

The notation used throughout this paper is fairly standard. ℝ^n and ℝ^{n×m} denote, respectively, the n-dimensional Euclidean space and the set of all n×m real matrices. The superscript T denotes transposition, and the notation X ≥ Y (respectively, X > Y), where X and Y are symmetric matrices, means that X − Y is positive semidefinite (respectively, positive definite). I_n is the n×n identity matrix. ∥⋅∥ is the Euclidean norm in ℝ^n. diag{⋯} stands for a block-diagonal matrix. The notation ∗ always denotes the symmetric block in a symmetric matrix.

2 Problem description and preliminaries

Consider the following neural networks with time-varying delays:

(1)

where x(⋅)=[x_1(⋅),x_2(⋅),…,x_n(⋅)]^T∈ℝ^n is the neuron state vector; A=diag{a_1,…,a_n}>0 is a diagonal matrix with positive entries a_i>0; the matrices W_1 and W_2 represent the connection weight matrix and the delayed connection weight matrix, respectively; g(x(⋅))=[g_1(x_1(⋅)),…,g_n(x_n(⋅))]^T denotes the neuron activation function; J(t)=[J_1(t),…,J_n(t)]^T is an external input vector; and τ(t) denotes the time-varying delay, satisfying

$$ 0 \leq h_{1}\leq \tau(t)\leq h_{2},\qquad \dot{\tau}(t)\leq \mu<\infty, $$
(2)

in which h_1, h_2, and μ are constants.
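For reference, with the quantities just defined, the standard delayed neural network model that these definitions describe (the usual reading of (1)) is

$$ \dot{x}(t)=-Ax(t)+W_{1}g\bigl(x(t)\bigr)+W_{2}g\bigl(x(t-\tau(t))\bigr)+J(t). $$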

Assumption 1

The neuron activation function g_i(⋅) in (1) satisfies

$$ l_{i}^{-} \leq \frac{g_{i}(a)-g_{i}(b)}{a-b} \leq l_{i}^{+}, $$
(3)

for all a, b∈ℝ, a≠b, i=1,2,…,n. The constants \(l_{i}^{-}, l_{i}^{+}\) in Assumption 1 are allowed to be positive, negative, or zero. Hence, the resulting activation functions may be nonmonotonic and are more general than the usual sigmoid functions.
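As a quick sanity check of condition (3), the following sketch numerically estimates the sector bounds of a sample activation (here tanh, used purely as an illustration; it is not one of the activation functions of the example in Section 4):

```python
import numpy as np

def sector_bounds(g, lo=-5.0, hi=5.0, num=401):
    """Numerically estimate l^- and l^+ in (3) for a scalar activation g
    by evaluating the difference quotient (g(a)-g(b))/(a-b) on a grid."""
    s = np.linspace(lo, hi, num)
    a, b = np.meshgrid(s, s)
    mask = a != b                      # condition (3) only requires a != b
    q = (g(a[mask]) - g(b[mask])) / (a[mask] - b[mask])
    return q.min(), q.max()

l_minus, l_plus = sector_bounds(np.tanh)
print(l_minus, l_plus)   # close to 0 and 1: tanh satisfies (3) with l^- = 0, l^+ = 1
```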

The network measurement is expressed by

$$ y(t) = Cx(t), $$
(4)

where y(t)∈ℝ^m is the measurement output and C∈ℝ^{m×n} is a known constant matrix with appropriate dimensions. In this paper, the measurement output is sampled before it enters the estimator. Denote by t_k the updating instant of the zero-order hold (ZOH), and suppose that the updating signal at the instant t_k has experienced a constant signal transmission delay η. It is assumed that the sampling intervals satisfy

$$ t_{k+1}-t_{k}=h_{k}\leq h,\quad k=0, 1, 2,\ldots $$
(5)

where h is a positive scalar and represents the largest sampling interval.

Thus, we have that

$$ t_{k+1}-t_{k}+\eta\leq h+\eta=\rho,\quad k=0, 1, 2,\ldots. $$
(6)

Therefore, the network measurement (4) takes the form y(t_k)=Cx(t_k−η). Thus, considering the behavior of the ZOH, we have

$$ y(t) = Cx(t_{k}-\eta),\quad t_{k}\leq t<t_{k+1}. $$
(7)
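The sampled and delayed measurement (7) is straightforward to mimic in simulation; the sketch below builds the piecewise-constant signal y(t)=Cx(t_k−η) on a uniform time grid for a hypothetical state trajectory and output matrix (both placeholders, assumed only for illustration):

```python
import numpy as np

def sampled_delayed_output(x_hist, C, dt, h, eta):
    """Hold C x(t_k - eta) on [t_k, t_{k+1}), as in (7).

    x_hist : array of shape (N, n), state samples on a grid of step dt
    h      : sampling interval (taken constant here; the text only requires h_k <= h)
    eta    : constant transmission delay
    """
    N = x_hist.shape[0]
    y = np.zeros((N, C.shape[0]))
    step = max(1, int(round(h / dt)))   # grid points per sampling interval
    lag = int(round(eta / dt))          # grid points of transmission delay
    for k0 in range(0, N, step):        # k0 is the grid index of t_k
        y[k0:k0 + step] = C @ x_hist[max(0, k0 - lag)]   # zero-order hold
    return y

# Hypothetical data: two states, one output, h = 0.2, eta = 0.04
dt = 0.01
t = np.arange(0.0, 2.0, dt)
x_hist = np.stack([np.sin(t), np.cos(t)], axis=1)
C = np.array([[1.0, 0.5]])
y = sampled_delayed_output(x_hist, C, dt, h=0.2, eta=0.04)
```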

Based on the available sampled measurement (7), the following full-order state estimator for the delayed neural network (1) is designed:

(8)

where \(\hat{x}(t)\) is the estimate of the neuron state x(t), and K∈ℝ^{n×m} is the estimator gain matrix to be designed later.

Define d(t)=t−t_k+η, t_k≤t<t_{k+1}. Then the full-order state estimator for the delayed neural network (1) can be rewritten as

(9)

Define the error vector by \(e(t)=x(t)-\hat{x}(t)\). Then the error dynamics can be directly obtained from (1) and (9):

(10)

where \(\phi(t)=g(x(t))-g(\hat{x}(t))\), and it follows from (6) that η≤d(t)<t_{k+1}−t_k+η≤ρ and \(\dot{d}(t)=1\) for t≠t_k.
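For orientation, under the usual choice of injection term K[y(t)−C\(\hat{x}\)(t_k−η)] in (8), the error dynamics (10) can be read as

$$ \dot{e}(t)=-Ae(t)+W_{1}\phi(t)+W_{2}\phi\bigl(t-\tau(t)\bigr)-KCe\bigl(t-d(t)\bigr),\quad t_{k}\leq t<t_{k+1}, $$

with d(t) the sawtooth signal defined above; with the values used later in Section 4 (h=0.2 and η=0.04), for instance, d(t) varies within [0.04,0.24)=[η,ρ).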

We now state the sampled-data estimation problem. The main purpose of this paper is to design a sampled-data estimator of the form (9) to estimate the state of the neural network (1) such that the estimation error converges to zero asymptotically. In other words, we look for an estimator gain matrix K such that the error system (10) is asymptotically stable. Before proceeding further, the following essential lemmas are introduced.

Lemma 2.1

[38]

Let z(t)∈W[a,b) and z(a)=0. Then for any n×n matrix R>0 the following inequality holds:

$$\int_{a}^{b}z^{T}(s)Rz(s)\,ds\leq \frac{4(b-a)^2}{\pi^2} \int_{a}^{b}\dot{z}^{T}(s)R\dot{z}(s)\,ds. $$
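A quick numerical illustration of Lemma 2.1 in the scalar case, with a hypothetical test function satisfying z(a)=0 (a sanity check only, not part of any proof):

```python
import numpy as np
from scipy.integrate import quad

a, b, R = 0.0, 1.0, 2.0                     # hypothetical interval and weight R > 0
z    = lambda s: (s - a) ** 2               # smooth test function with z(a) = 0
zdot = lambda s: 2.0 * (s - a)

lhs = quad(lambda s: z(s) * R * z(s), a, b)[0]
rhs = 4.0 * (b - a) ** 2 / np.pi ** 2 * quad(lambda s: zdot(s) * R * zdot(s), a, b)[0]
print(lhs, rhs, lhs <= rhs)                 # about 0.4 <= 1.08, consistent with Lemma 2.1
```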

Lemma 2.2

[39] (Jensen’s inequality)

For any constant matrix M∈ℝ^{m×m}, M=M^{T}>0, a scalar γ>0, and a vector function ω:[0,γ]→ℝ^m such that the integrations concerned are well defined, the following inequality holds:

$$\gamma\int_{0}^{\gamma}\omega^{T}(s)M\omega(s)\,ds\geq \biggl(\int_{0}^{\gamma}\omega(s)\,ds\biggr)^{T}M \biggl(\int_{0}^{\gamma}\omega(s)\,ds\biggr). $$

Lemma 2.3

(Schur complement)

Given constant matrices Ω_1, Ω_2, and Ω_3 with appropriate dimensions, where \({\varOmega}_{1}^{T}={\varOmega}_{1}\) and \({\varOmega}_{2}^{T}={\varOmega}_{2}>0\), then

$${\varOmega}_{1}+{\varOmega}_{3}^{T}{\varOmega}_{2}^{-1}{\varOmega}_{3}<0 $$

if and only if

$$\begin{bmatrix}{\varOmega}_{1} & {\varOmega}_{3}^{T}\\ {\varOmega}_{3} & -{\varOmega}_{2}\end{bmatrix}<0 \quad\text{or}\quad \begin{bmatrix}-{\varOmega}_{2} & {\varOmega}_{3}\\ {\varOmega}_{3}^{T} & {\varOmega}_{1}\end{bmatrix}<0. $$

3 Main results

In this paper, inspired by [40], we construct a new discontinuous Lyapunov functional to study the stability of the system described by Eq. (10). Let α be a scalar belonging to the interval (0,1). Then the interval [0,h_2] is divided into four subintervals, that is,

(11)
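Judging from the integral terms listed in Remark 3.3 and the coefficients h_2−τ(t), (1−α)τ(t), α(τ(t)−h_1), and αh_1 appearing in Ξ_1 below, this partition can be read as

$$ [0,h_{2}]=[0,\alpha h_{1}]\cup[\alpha h_{1},\alpha\tau(t)]\cup[\alpha\tau(t),\tau(t)]\cup[\tau(t),h_{2}], $$

so that the four subintervals have lengths αh_1, α(τ(t)−h_1), (1−α)τ(t), and h_2−τ(t), respectively.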

For representation convenience, the following notations are introduced:

We shall establish our main results within the LMI framework.

Theorem 3.1

For given scalars h_1, h_2, μ, η, ρ, and α∈(0,1), the equilibrium point of the error system (10) is asymptotically stable if there exist matrices \(P=P^{T}>0, R_{l}=R_{l}^{T}>0\) (l=1,…,6), \(Q_{k}=Q_{k}^{T}>0\) (k=1,2,3), W=W^T>0, diagonal matrices U_1>0, U_2>0, and any matrices L, G, M_a, N_a, X_a, and Y_a (a=1,2) such that the following LMIs hold:

(12)
(13)

where Ξ=(Ξ_{l,k})_{11×11} with

and the other entries of Ξ are zero. Moreover, the state estimator gain matrix is given by K=G^{−1}L.
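The LMI entries of Ξ are not repeated here, but the way conditions of this type are used is standard: solve a semidefinite feasibility problem for the decision variables and recover the gain from K=G^{−1}L. The sketch below illustrates this workflow in Python/CVXPY on a deliberately simplified, hypothetical static analogue (a plain Lyapunov LMI for an observer error system \(\dot{e}=(A_{0}-KC)e\) with the linearization L=PK, so P plays the role that the slack matrix G plays in Theorem 3.1); it is not the actual LMIs (12)–(13):

```python
import numpy as np
import cvxpy as cp

# Hypothetical data for the simplified analogue (not the example of Section 4).
A0 = np.array([[-2.0,  0.5],
               [ 0.3, -1.5]])
C  = np.array([[1.0, 0.0]])
n, m = A0.shape[0], C.shape[0]

P = cp.Variable((n, n), symmetric=True)      # Lyapunov matrix
L = cp.Variable((n, m))                      # linearized gain variable, L = P K

lmi = A0.T @ P + P @ A0 - C.T @ L.T - L @ C
lmi = 0.5 * (lmi + lmi.T)                    # keep the expression explicitly symmetric
eps = 1e-6
constraints = [P >> eps * np.eye(n), lmi << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

if prob.status in ("optimal", "optimal_inaccurate"):
    K = np.linalg.solve(P.value, L.value)    # K = P^{-1} L, mirroring K = G^{-1} L
    print("estimator gain K =\n", K)
    print("eigenvalues of A0 - K C:", np.linalg.eigvals(A0 - K @ C))
```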

Proof

Consider the following discontinuous Lyapunov–Krasovskii functional for system (10):

(14)

where

Here, it should be noted that V_4(t) can be rewritten as

$$ V_{4}(t) = (\rho-\eta)^2\int_{t-\eta}^{t}\dot{e}^{T}(s)W\dot{e}(s)\,ds+\tilde{V}_{4}(t), $$
(15)

where

According to Lemma 2.1, we find that \(\tilde{V}_{4}(t)\geq 0\). Moreover, \(\tilde{V}_{4}(t)\) vanishes at t=t_k. Hence, \(\lim_{t\rightarrow t_{k}^{-}}V(t)\geq V(t_{k})\).

Calculating the derivative of V(t) along the solution of (10) gives

$$ \dot{V}(t)=\dot{V}_{1}(t)+\dot{V}_{2}(t)+\dot{V}_{3}(t)+\dot{V}_{4}(t), $$
(16)

where

From Lemma 2 in [41], for any matrices M_a, N_a, X_a, and Y_a (a=1,2), we have that

(17)
(18)
(19)

where

From Assumption 1, we can easily obtain the following inequality:

Thus, for any diagonal matrices U_1=diag{u_{11},u_{12},…,u_{1n}}>0 and U_2=diag{u_{21},u_{22},…,u_{2n}}>0, it follows that

where e_i denotes the unit column vector with a 1 in its ith entry and zeros elsewhere.

The above two inequalities are equivalent to

(20)
(21)

From (10) and the Leibniz–Newton formula, for any appropriately dimensioned matrices Y_1, Y_2, and G, the following equations hold:

(22)
(23)

We can easily get the following inequality:

Using (17)–(19) in (16), subtracting (20)–(21) from (16), adding (22) and (23) to (16), and using the relation GK=L, we have

$$ \dot{V}(t) \leq \xi^{T}(t) {\varXi}_{1} \xi(t), $$
(24)

where \({\varXi}_{1}={\varXi}_{0}+(h_{2}-\tau(t))MQ_{1}^{-1} M^{T}+(1-\alpha)\tau(t) NQ_{1}^{-1}N^{T}+(\tau(t)- h_{1})\alpha XQ_{1}^{-1}X^{T}\) with \({\varXi}_{0}={\varXi}+\alpha h_{1} YQ_{1}^{-1}Y^{T}\).

If Ξ_1<0, then there exists a scalar β>0 such that \(\dot{V}(t)\leq - \beta \|e(t)\|^{2}\), and hence the error system (10) is asymptotically stable.

Notice that \((h_{2}-\tau(t))MQ_{1}^{-1}M^{T}+(1-\alpha)\tau(t) NQ_{1}^{-1}N^{T}+( \tau(t)- h_{1})\alpha XQ_{1}^{-1}X^{T}\) is a convex combination of the matrices \(MQ_{1}^{-1}M^{T}\), \((1-\alpha) NQ_{1}^{-1}N^{T}\), and \(\alpha XQ_{1}^{-1}X^{T}\) over τ(t)∈[h_1,h_2]; therefore, by a convex analysis argument, Ξ_1<0 if and only if

(25)
(26)
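Explicitly, evaluating Ξ_1 at the endpoints τ(t)=h_1 and τ(t)=h_2 suggests that (25) and (26) read, respectively,

$$ {\varXi}_{0}+(h_{2}-h_{1})MQ_{1}^{-1}M^{T}+(1-\alpha)h_{1}NQ_{1}^{-1}N^{T}<0, \qquad {\varXi}_{0}+(1-\alpha)h_{2}NQ_{1}^{-1}N^{T}+\alpha(h_{2}-h_{1})XQ_{1}^{-1}X^{T}<0, $$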

which, by Lemma 2.3, are equivalent to (12) and (13). This completes the proof. □

Remark 3.2

The state estimation problem for various kinds of neural networks, such as neural networks of neutral type, neural networks with distributed delay, neural networks with Markovian jumping parameters, and fuzzy neural networks, has been studied in [11–25]. Delay-dependent criteria were developed to estimate the neuron states through the available output measurements such that the estimation error system is stable. Recently, in [37], the problem of exponential state estimation for delayed recurrent neural networks with sampled data was considered. Different from the existing literature, in this paper the design of a state estimator for neural networks with sampled data using the discontinuous Lyapunov functional approach is addressed, and the stability conditions are derived in terms of a set of LMIs, which can be checked efficiently by standard numerical packages.

Remark 3.3

In Theorem 3.1, by use of α∈(0,1), the integral term \(-\int_{t-h_{2}}^{t}\dot{e}^{T}(s)Q_{1}\dot{e}(s)\,ds\) is divided into four parts, namely \(-\int_{t-h_{2}}^{t-\tau(t)}\dot{e}^{T}(s)Q_{1}\dot{e}(s)\,ds\), \(-\int_{t-\tau(t)}^{t-\alpha \tau(t)}\dot{e}^{T}(s)Q_{1}\dot{e}(s)\,ds\), \(-\int_{t-\alpha \tau(t)}^{t-\alpha h_{1}}\dot{e}^{T}(s)Q_{1}\dot{e}(s)\,ds\), and \(-\int_{t-\alpha h_{1}}^{t}\dot{e}^{T}(s)Q_{1}\dot{e}(s)\,ds\), which may lead to less conservative results. On the other hand, by including a few slack matrices in Eqs. (17)–(19), (22), and (23) and using the convex combination technique, less conservative delay-dependent stability conditions are derived for the considered error system.

Remark 3.4

In this paper, Theorem 3.1 provides a delay-dependent condition ensuring the existence of a desired sampled-data state estimator for neural networks with time-varying delays by using the discontinuous Lyapunov functional approach. The discontinuous term is introduced in V_4(t); it originates from [35] and makes full use of the sawtooth structure of the sampling-induced input delay. In Theorem 3.1, the delay interval [0,h_2] is divided into the four subintervals shown in (11) by using the parameter α, following the recent work [40]; this division plays an important role in further reducing the conservatism of previous results.

If the sampled-data network measurement is not considered (i.e., the continuous measurement (4) is available to the estimator), then the corresponding state estimator for (1) can be described as follows:

(27)

Then the error dynamics can be easily obtained from (1) and (27) as

(28)

The following corollary gives a delay-dependent stability condition for the error system (28).

Corollary 3.5

For given scalars h_1, h_2, μ, and α∈(0,1), the equilibrium point of the error system (28) is asymptotically stable if there exist matrices \(P=P^{T}>0, R_{l}=R_{l}^{T}>0\) (l=1,…,4), \(Q_{1}=Q_{1}^{T}>0\), diagonal matrices U_1>0, U_2>0, and any matrices L, G, M_a, N_a, X_a, and Y_a (a=1,2) such that the following LMIs hold:

(29)
(30)

where Θ=(Θ_{l,k})_{8×8} with

and the other entries of Θ are zero. Moreover, the state estimator gain matrix is given by K=G^{−1}L.

Proof

Take R_5=R_6=Q_2=Q_3=W=0 in (14) and use the following equation instead of (22):

where γ is a scalar. Then the proof is similar to that of Theorem 3.1 and is omitted here. □

4 Numerical example

In this section, a numerical example is provided along with simulation results to illustrate the potential benefits and effectiveness of the developed method for estimator design of delayed neural networks.

Consider a third-order delayed neural network (1) with the following parameters, as in [37]:

and the activation functions are taken as follows:

These activation functions satisfy Assumption 1 with \(l_{1}^{-}=l_{2}^{-}=l_{3}^{-}=0\) and \(l_{1}^{+}=0.2,\ l_{2}^{+}=0.75, \ l_{3}^{+}=0.2\). Thus, we obtain the following parameters:

$$L_{1}= \mathrm{diag}\{0 ,0, 0\},\qquad L_{2} = \mathrm{diag}\{0.1, 0.375, 0.1\}. $$

The parameters of the network output signal are given as

For the above system, it was reported in [37] that the system is stable when h_2=0.35, μ=0.6, and the sampling period is taken as 0.05. However, using the MATLAB LMI Control Toolbox and Theorem 3.1 with μ=1, sampling interval h=0.2, and constant delay η=0.04, we obtain the maximum allowable upper bounds listed in Table 1. Further, by Corollary 3.5 with μ=1 and γ=0.15, the maximum allowable upper bounds are also listed in Table 1. It can be seen from Table 1 that the obtained upper bounds of the delays guarantee the asymptotic stability of the error systems.

Table 1 Maximum allowable time-delay upper bound h_2 for h=0.2, η=0.02, and μ=1

Here, it should be pointed out that Theorem 3.1 gives less conservative results than Corollary 3.5, because Theorem 3.1 provides new delay-dependent criteria under sampled data by using a Lyapunov–Krasovskii functional involving a discontinuous term.

Now assume that h_2=1.2, h_1=0, μ=0.6, α=0.54, the sampling interval h=0.2, and the constant delay η=0.04. Solving the LMIs in Theorem 3.1, we obtain the following feasible solution matrices:

and the state estimator gain K can be designed as

This shows that the results established in this paper are less conservative than the previous results in [37]. For simulation purposes, the time-varying delay is taken as τ(t)=0.3+0.6sin t, so that \(\dot{\tau}(t)\leq 0.6\), and J(t)=[sin(1.8t) sin^2(t) cos^2(t)]^T. In Fig. 1, the responses of the estimation error (10) are given with initial states chosen as x(t)=[−3 0.5 −0.9]^T and \(\hat{x}(t)=[0.8\quad -0.9\quad 1.39]^{T}\). Additionally, the responses of the estimation error (10) for different initial values are shown in Fig. 2. These simulations illustrate that, as guaranteed by Theorem 3.1 and Corollary 3.5, the error systems (10) and (28) are asymptotically stable.
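For readers who wish to reproduce error trajectories of the kind shown in Figs. 1 and 2, the sketch below simulates the plant, the sampled-data estimator, and the resulting error by simple Euler integration. All matrices (A, W_1, W_2, C) and the gain K are placeholders, not the values of the example above or the gain obtained from Theorem 3.1, so the output only illustrates the procedure:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder parameters -- NOT the matrices of the example from [37]; substitute the
# actual A, W1, W2, C and the gain K from Theorem 3.1 to reproduce Fig. 1.
A  = np.diag([1.0, 1.2, 1.1])
W1 = np.array([[ 0.2, -0.1,  0.0],
               [ 0.1,  0.3, -0.2],
               [ 0.0,  0.1,  0.2]])
W2 = 0.5 * W1
C  = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])
K  = 0.3 * np.ones((3, 2))                      # placeholder estimator gain

g   = np.tanh                                    # an activation satisfying Assumption 1
tau = lambda t: 0.3 + 0.6 * np.sin(t)            # time-varying delay used in this section
J   = lambda t: np.array([np.sin(1.8 * t), np.sin(t) ** 2, np.cos(t) ** 2])

dt, T, h, eta = 1e-3, 20.0, 0.2, 0.04            # step, horizon, sampling interval, delay
N = int(T / dt)
x  = np.zeros((N, 3)); x[0]  = [-3.0, 0.5, -0.9]
xh = np.zeros((N, 3)); xh[0] = [ 0.8, -0.9, 1.39]
held, t_next = np.zeros(3), 0.0                  # ZOH-held injection term and next t_k

for k in range(N - 1):
    t = k * dt
    if t >= t_next:                              # sampling instant t_k: refresh the ZOH
        i = max(0, k - int(round(eta / dt)))     # grid index of t_k - eta
        held = K @ (C @ (x[i] - xh[i]))          # K[y(t_k) - C xh(t_k - eta)]
        t_next += h
    j = max(0, k - int(round(max(tau(t), 0.0) / dt)))   # grid index of t - tau(t)
    dx  = -A @ x[k]  + W1 @ g(x[k])  + W2 @ g(x[j])  + J(t)
    dxh = -A @ xh[k] + W1 @ g(xh[k]) + W2 @ g(xh[j]) + J(t) + held
    x[k + 1]  = x[k]  + dt * dx
    xh[k + 1] = xh[k] + dt * dxh

plt.plot(np.arange(N) * dt, x - xh)
plt.xlabel("t"); plt.ylabel("e(t)")
plt.title("estimation error (placeholder parameters)")
plt.show()
```

Replacing the placeholder matrices with the actual parameters from [37] and the gain computed from Theorem 3.1 would yield trajectories comparable to Fig. 1.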

Fig. 1 The error trajectories

Fig. 2 The error trajectories for different initial values

5 Conclusions

In this paper, a discontinuous Lyapunov functional approach has been developed to investigate the state estimation problem for neural networks with time-varying delay under sampled-data measurements. The construction of this functional is based on the vector extension of Wirtinger’s inequality. By constructing a new Lyapunov–Krasovskii functional involving discontinuous terms and using some integral inequalities and the convex combination technique, a novel delay-dependent stability criterion has been derived. It has been shown that the design of a proper sampled-data state estimator is accomplished directly by checking the feasibility of LMIs. Finally, a numerical example has been given to demonstrate the effectiveness of this approach and its improvement over existing results.