Issue
Security and Safety
Volume 2, 2023
Security and Safety in Unmanned Systems
Article Number 2023029
Number of page(s) 16
Section Other Fields
DOI https://doi.org/10.1051/sands/2023029
Published online 12 December 2023

© The Author(s) 2023. Published by EDP Sciences and China Science Publishing & Media Ltd.

Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

Owing to the rapid development of artificial intelligence technology, countries around the world have developed unmanned systems. These systems do not require the physical presence of human operators on board and have increasingly autonomous functions, overcoming the limitations of human operators. Their applications have gradually permeated various sectors, including industrial production, social life, and defense technology, with examples such as self-driving vehicles and unmanned aerial vehicle reconnaissance [1, 2]. Since multiple systems offer more flexibility and robustness than single systems, extensive research has been conducted on multiple unmanned systems (MUSs). In particular, the distributed cooperative control of MUSs has garnered significant attention from scholars [3–8]. In practical control designs, MUSs are equipped with small built-in microprocessors that gather data from adjacent agents and compute control actions according to pre-established rules, while nonlinear phenomena are ubiquitous in the real world. Consequently, the controllers are implemented on digital platforms, and the control policy is updated only at discrete time instants. Thus, the study of nonlinear discrete-time MUSs holds great importance, and promising results have been reported so far [9–12]. With the advancement of networks, conserving network transmission bandwidth in MUSs has emerged as a prevalent trend.

Quantization is a straightforward yet effective method to alleviate the transmission load in communication channels within MUSs, and it has yielded commendable outcomes [13–15]. Concerning control strategies employing quantized data, Zhang et al. [16] addressed quantized control for linear systems with state observers. Similarly, Liu et al. [17, 18] explored input-quantized state feedback control and state-quantized output feedback control for nonlinear systems, respectively. To enhance the security and efficiency of data transmission in resource-constrained communication networks, quantization-based encoding-decoding technology has captured scholars’ attention. In [19], two encoding-decoding schemes were proposed, enabling the attainment of average consensus in linear systems. Notably, recent extensions [20–22] have surfaced; for example, [20] introduced a control protocol utilizing symmetric compensation techniques alongside a distributed quantization-based codec scheme to achieve consensus in discrete-time systems over jointly connected networks. Dong [21] pioneered consensus among multiple nonlinear systems with limited bandwidth, introducing a novel approach to designing distributed controllers for both the inner and outer loops. In [22], a quantized data-driven method based on a codec mechanism was proposed for nonlinear non-affine systems. Moreover, Ren et al. [23] and Zhu et al. [24] utilized data-driven control methods together with an encoding-decoding uniform quantization mechanism, effectively compressing the volume of communication data between agents. Thus, the development of a practical control scheme for MUSs that alleviates the transmission load in communication channels is of paramount importance.

As networks rapidly expand, network security issues have taken on greater prominence. In networked environments, MUSs can achieve efficient cooperation while simultaneously facing increased security risks. This has spurred researchers to delve into the security concerns of MUSs. The communication networks of MUSs are susceptible to denial-of-service and false data injection (FDI) attacks. FDI attacks occur when an adversary intercepts communication between agents and deliberately injects inaccurate packets [25]. In [26], an optimal FDI attack scheme based on historical and current residuals was proposed to maximally undermine performance from the attacker’s perspective. Li et al. [27] examined secure consensus for a multiple-input multiple-output system under FDI attacks, employing an event-triggering proposal. Similarly, Meng et al. [28] presented the adaptive resilient control problem for linear multiple systems subjected to sensor and actuator attacks. Zhang et al. [29] introduced a resilient observer-based event-triggered control strategy to address secure consensus in multi-agent scenarios under FDI attacks following Bernoulli processes. In addressing unknown FDI attacks, Wang et al. [30] proposed an observer-based fully asynchronous event-triggered controller to achieve bipartite consensus among multiple agents. In the context of output consensus under FDI attacks, Huo et al. [31] developed state- and output-feedback cooperative control strategies. Hu et al. [32] tackled stochastic analysis and controller design problems for networked systems, considering false data injection attacks in the sensor-controller and controller-actuator channels. Quasi-consensus of stochastic nonlinear time-varying multi-agent systems under multi-modal FDI attack models was addressed in [33]. However, because they are exposed, the communication networks among agents remain highly vulnerable to attacks. To address this, Tahoun and Arafa [34] designed a distributed adaptive secure control scheme for multi-agent networked systems under unknown FDI and replay attacks. Consequently, ensuring the reliability and security of communication networks necessitates the exploration of secure control techniques for networked nonlinear MUSs vulnerable to FDI attacks.

As a consequence, secure cooperative control of MUSs under FDI attacks has garnered significant interest and importance. Drawing from prior research, quantization-based coding and decoding techniques have demonstrated the capability to achieve secure and efficient data transmission within resource-constrained digital communication networks. In light of this, the current study delves into the secure cooperative control challenge posed by discrete-time nonlinear MUSs in the presence of FDI attacks, and introduces a quantization-based codec scheme designed to facilitate efficient and secure data transmission in resource-constrained digital communication networks. Existing quantization-based coding and decoding techniques have predominantly been applied to linear multi-agent systems [19, 20, 23, 24], or specifically to the controller-to-actuator component of an agent [35, 36]. However, there is a noticeable dearth of results for high-order unknown nonlinear multi-agent systems. This gap arises from the complexities introduced by high-order nonlinearity and unknown dynamics, which render the application of existing quantization-based codec techniques challenging. Moreover, in the presence of cyber attacks, these conventional methods cannot guarantee that the system achieves consensus. Consequently, adapting quantization-based codec techniques to high-order unknown nonlinear MUSs under FDI attacks represents a formidable challenge, which serves as the impetus behind this paper. To this end, this paper proposes a codec-based neural network control approach for addressing the distributed quantized cooperative control problem within high-order nonlinear multi-agent systems. The framework takes into account transmission channels between agents that are potentially subject to FDI attacks.

The remainder of this study is organized as follows: Section 2 presents the preliminaries and problem formulation. The design of secure consensus control and the stability analysis are detailed in Sections 3 and 4, respectively. Section 5 provides simulation examples, while Section 6 concludes the study.

2. Preliminaries

2.1. System model

This study discusses the secure cooperative control of MUSs comprising N agents, wherein each agent is described by

$$ \begin{aligned} \left\{ \begin{aligned} x_{i,m}(k+1)&=g_{i,m}x_{i,m+1}(k)+\digamma _{i,m}(\bar{x}_{i,m}(k)),\\ x_{i,n}(k+1)&=g_{i,n}u_{i}(k)+\digamma _{i,n}(\bar{x}_{i,n}(k)), \\ y_{i}(k)&=x_{i,1}(k), \end{aligned}\right. \end{aligned} $$(1)

where m = 1, …, n − 1, i = 1, ⋯, N, k = 0, 1, 2, …, $ \bar{x}_{i,n}(k)=[x_{i,1}(k),\ldots,x_{i,n}(k)]^T $ is the system state, gi, m is an unknown positive constant gain, $ \digamma_{i,m}(\bar{x}_{i,m}(k)) $ is an unknown nonlinear Lipschitz function, ui(k) is the plant control input, and yi(k) is the plant output.

Remark 1. System (1) is a unified structure that covers many practical unmanned systems, such as the motion of aircraft wings [37], robotic manipulators [38], and ship steering systems [39]. Each variable in (1) depends on the specific characteristics of the actual system.

Definition 1. A function ϝ(x):Ω → ℝ is Lipschitz if there exists a constant Ψ >  0 such that

$$ \begin{aligned} |\digamma (x)-\digamma (y)|&\le \mathrm{\Psi }|x-y| \end{aligned} $$

where x, y ∈ Ω.

Definition 2. The solution of system (1) is uniformly ultimately bounded if, for all initial states x(0)∈D, there exists a number N(ϵ, x(0)) such that ∥x(k)∥ ≤ ϵ for all k ≥ N(ϵ, x(0)), where D is a compact set and ϵ >  0.

2.2. Communication graph

A directed graph is used to describe information communication among agents. 𝒢 = (𝒩, ℰ, 𝒜) is introduced with a set of vertices 𝒩 = {1, …, N}, a set of directed channels ℰ, and an adjacency matrix 𝒜 = (ai, j). If information is transmitted from vertex j to vertex i, then (j, i) is a graph edge (a directed channel) with weight ai, j. We have ai, j >  0 if and only if (j, i)∈ℰ; otherwise ai, j = 0. The set of neighbors of vertex i is defined as 𝒩i = {j|(j,i)∈ℰ}. The in-degree of vertex i is $ d_{i}=\sum^{N }_{j=1} a_{i,j} $, and the in-degree matrix is D = diag(d1, …, dN). The Laplacian matrix of 𝒢 is ℒ = D − 𝒜, and ℒ1N × 1 = 0 with 1N × 1 = [1, 1, …, 1]T.
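As a quick numerical illustration, the in-degree matrix and Laplacian can be assembled from an adjacency matrix as follows; the 4-agent topology used here is hypothetical and serves only to show the construction, not the topology of the later simulations.

```python
import numpy as np

# Hypothetical weighted adjacency matrix A = (a_ij) for N = 4 agents;
# a_ij > 0 iff the directed edge (j, i) exists, i.e. agent i receives from j.
A = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 0., 0., 0.]])

d = A.sum(axis=1)      # in-degrees d_i = sum_j a_ij
D = np.diag(d)         # in-degree matrix D = diag(d_1, ..., d_N)
L = D - A              # Laplacian of the digraph

# Every row of L sums to zero, i.e. L @ 1 = 0.
assert np.allclose(L @ np.ones(4), 0.0)
```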

Assumption 1. A graph is considered connected if there exists a path between any two nodes, which is formed by a series of edges. For the purposes of this study, assume that the directed graph 𝒢 is connected.

2.3. Encoding-decoding scheme

This subsection introduces the designed codec, which is used later in the controller design. Its purpose is to employ the quantizer to reduce the amount of information transferred between agents and thereby address the issue of insufficient bandwidth. Specifically, in this scheme, each transmission channel encodes the sender’s state value as a data point prior to transmission. Subsequently, the receiver employs a decoder to estimate the sender’s state after receiving the data. The encoder Ei corresponding to agent i is expressed as follows:

$$ \begin{aligned} \left\{ \begin{aligned} \xi _{i,r}(0)&=0,\\ \xi _{i,r}(k)&=\xi _{i,r}(k-1)+s_{i,r}^{a}(k), \\ s_{i,r}(k)&=q(x_{i,r}(k)-\xi _{i,r}(k-1)), \end{aligned}\right. \end{aligned} $$(2)

where ξi, r(k) (r = 1, …, n) are the internal states of Ei, xi, r(k) is the input, and $ s_{i,r}^{a}(k) $ is the output, which is transmitted to the neighbors. Here, q is a quantizer, which is expressed as follows

$$ \begin{aligned} q(\mu )=\left\{ \begin{array}{ll} q_{l}\left(\zeta _{th}\right)+\left\lfloor \dfrac{\mu -\zeta _{th}}{h}+o\right\rfloor h, & |\mu |\ge \zeta _{th} \\ q_{l}\left(\zeta _{th}\right), & |\mu |\le \zeta _{th} \end{array}\right. \end{aligned} $$(3)

where ζth >  0 is a specified constant that serves as the threshold for switching between the logarithmic and uniform quantizers; the uniform quantization step is characterized by the parameter h = |ql(ζth)−ζth|, where ql(μ) represents the output of a logarithmic quantizer, as expressed below

$$ \begin{aligned} q_{l}(\mu )=\left\{ \begin{array}{ll} \zeta _{\iota }\,\mathrm{sgn}(\mu ), & \dfrac{\zeta _{\iota }}{1+\epsilon }<|\mu |\le \dfrac{\zeta _{\iota }}{1-\epsilon } \\ 0, & |\mu |\le \dfrac{\zeta _{\min }}{1+\epsilon } \end{array}\right. \end{aligned} $$(4)

with $ \frac{\zeta_{\min}}{1+\epsilon} > 0 $ determining the dead zone size for ql(μ), $ \zeta_{\iota}=\kappa^{1-\iota}\zeta_{\min} $, ι = 1, 2, ⋯ and $ \kappa=\frac{1-\epsilon}{1+\epsilon} $ with 0 <  ϵ <  1. Additionally, the variable o is defined as follows: o = 1 if ql(ζth)< ζth, and o = 0 otherwise. The quantization error |q(μ)−μ| is bounded by $ \max\{\frac{\epsilon-\epsilon^2}{1+\epsilon}\zeta_{th}, h\} $.

Remark 2. The logarithmic quantizer reduces quantization errors when the amplitude is small. However, as the signal amplitude increases, the quantization levels of the logarithmic quantizer become coarser. To address this issue, a logarithmic-uniform quantizer is employed. This quantizer aims to minimize the average communication rate across instances.
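For illustration, the quantizer (3)–(4) can be transcribed almost directly into code. The sketch below is a minimal Python rendering of the two formulas as written; the parameter values ζth, ζmin, and ϵ are illustrative assumptions, not the values used in the later simulations.

```python
import math

def q_l(mu, zeta_min, eps):
    """Logarithmic quantizer (4): levels zeta_iota = kappa**(1-iota)*zeta_min,
    kappa = (1-eps)/(1+eps), with a dead zone |mu| <= zeta_min/(1+eps)."""
    if abs(mu) <= zeta_min / (1.0 + eps):
        return 0.0
    kappa = (1.0 - eps) / (1.0 + eps)
    zeta = zeta_min                      # level iota = 1
    # enlarge the level until (zeta/(1+eps), zeta/(1-eps)] contains |mu|
    while abs(mu) > zeta / (1.0 - eps):
        zeta /= kappa
    return math.copysign(zeta, mu)

def q(mu, zeta_th, zeta_min, eps):
    """Logarithmic-uniform quantizer (3): uniform steps of size h above the
    threshold zeta_th, the fixed level q_l(zeta_th) below it, as written in (3)."""
    ql_th = q_l(zeta_th, zeta_min, eps)
    h = abs(ql_th - zeta_th)             # uniform step size (nonzero for generic parameters)
    o = 1 if ql_th < zeta_th else 0
    if abs(mu) <= zeta_th:
        return ql_th
    return ql_th + math.floor((mu - zeta_th) / h + o) * h

# illustrative parameters only
print(q(2.7, zeta_th=1.0, zeta_min=0.1, eps=0.3))
```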

Assume that the communication between agents over the edge (j, i) may be subject to an FDI attack; then $ s_{j,r}^{i}(k)=s_{j,r}^{a}(k)+\delta_{j}^{i}(k) $ is the signal received by agent i from its neighbor agent j after the FDI attack. The decoder $ D_{j}^{i} $ in agent i is designed as follows

$$ \begin{aligned} \left\{ \begin{aligned} \xi _{j,r}^{i}(0)&=0,\\ \xi _{j,r}^{i}(k)&=\xi _{j,r}^{i}(k-1)+s_{j,r}^{i}(k)-\hat{\delta }_{j}^{i}(k) \end{aligned}\right. \end{aligned} $$(5)

where $ \hat{\delta}_{j}^{i}(k)=\hat{\delta}_{j}^{i}(k-1)-L_{i}(K_{i}z_{i,1}(k-1)+\hat{\delta}_{j}^{i}(k-1)) $ is the estimate of the false signal $ \delta_{j}^{i}(k) $, with Li, Ki being design parameters and zi, 1 being defined in (6), and $ \xi_{j,r}^{i}(k) (r=1,\ldots,n) $ are the outputs of $ D_{j}^{i} $. In particular, when the communication is not under attack, one has $ s_{j,r}^{i}(k)=s_{j,r}^{a}(k) $ and $ \hat{\delta}_{j}^{i}(k)=0 $.
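To make the information flow concrete, the following sketch implements one scalar channel of the encoder (2), the decoder (5), and the attack-estimate update for δ̂, reusing the quantizer q defined above; the gains Li, Ki and the driving signal zi, 1 are placeholders here, since they only become available once the controller of Section 3 is in place.

```python
class Encoder:
    """One channel of the encoder E_i in (2)."""
    def __init__(self, quantizer):
        self.q = quantizer
        self.xi = 0.0                    # xi_{i,r}(0) = 0

    def step(self, x):
        s = self.q(x - self.xi)          # s_{i,r}(k) = q(x_{i,r}(k) - xi_{i,r}(k-1))
        self.xi += s                     # xi_{i,r}(k) = xi_{i,r}(k-1) + s_{i,r}(k)
        return s                         # quantized innovation sent to the neighbors

class Decoder:
    """One channel of the decoder D_j^i in (5) with the FDI estimate delta_hat."""
    def __init__(self, L_i, K_i):
        self.xi = 0.0                    # xi_{j,r}^i(0) = 0
        self.delta_hat = 0.0
        self.L_i, self.K_i = L_i, K_i

    def step(self, s_received, z_i1_prev):
        # delta_hat(k) = delta_hat(k-1) - L_i (K_i z_{i,1}(k-1) + delta_hat(k-1))
        self.delta_hat -= self.L_i * (self.K_i * z_i1_prev + self.delta_hat)
        # xi(k) = xi(k-1) + s_j^i(k) - delta_hat(k)
        self.xi += s_received - self.delta_hat
        return self.xi
```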

Lemma 1. For the unknown nonlinear MUS (1), the internal state of the encoding scheme (2) estimates the state xi with bounded error.

Proof. Denote ei, r(k)=xi, r(k)−ξi, r(k), which can be rewritten as

$$ \begin{aligned} e_{i,r}(k)&=(x_{i,r}(k)-\xi _{i,r}(k-1))-s_{i,r}(k)=-\mathrm{\Delta }_{i,r}(k) \end{aligned} $$

where Δi, r = q(μ)−μ represents the quantization error. Thus, the errors ei, r(k) are bounded by $ \bar{{\mathrm{\Delta}}}_{i,r}=\max\{\frac{\epsilon-\epsilon^2}{1+\epsilon}\zeta_{th}, h\} $.▫

Remark 3. Unlike observers that are utilized for state estimation or observation within systems, encoders and decoders primarily handle the encoding and decoding of signals. An encoder/decoder constitutes a system or algorithm used for the encoding and decoding of signals or data. The encoder transforms the input signal into a designated encoding format, while the decoder reverts the encoded signal back to its original form. Their advantage lies in conserving communication bandwidth.

Assumption 2. The false signal $ \delta_{j}^{i}(k) $ is bounded, that is, $ |\delta_{j}^{i}(k)|\leq \bar{\delta}_{j}^{i} $.

3. Distributed state-feedback controller design

This section outlines the structure of a distributed controller based on a codec scheme within a backstepping framework. The following error variables are established

$$ \begin{aligned} z_{i,1}(k)&=\sum _{j=1}^{N}a_{i,j}(\xi _{i,1}(k)-\xi _{j,1}^{i}(k)) \end{aligned} $$(6) $$ \begin{aligned} z_{i,r}(k)&=\xi _{i,r}(k)-\alpha _{i,r-1}(k) \end{aligned} $$(7)

where αi, r − 1(r = 2, …, n) represent the intermediate variables to be designed.

Step 1. According to (6), zi, 1(k + 1) can be expressed as

$$ \begin{aligned} z_{i,1}(k+1)&=\sum _{j=1}^{N}a_{i,j}\left[\mathrm{\Delta }_{i,1}^{j}(k+1)+g_{i,1}\left[e_{i,2}(k)+z_{i,2}(k)+\alpha _{i,1}(k)\right] \right.\nonumber \\&\quad +\left.\digamma _{i,1}\left(x_{i,1}\right)-\digamma _{i,1}\left(\xi _{i,1}\right) +\digamma _{i,1}\left(\xi _{i,1}\right) -\bar{\digamma }_{j,1}\left(\chi _{j,1}\right)\right] \end{aligned} $$(8)

where $ \xi_{j,1}(k+1)=\bar{\digamma}_{j,1}(\chi_{j,1}) $, $ \bar{\digamma}_{j,1}(\chi_{j,1}(k)) $ is a nonlinear mapping, and $ \chi_{j,1}=[\xi_{j,1}(k),s_{j}^{i}(k),\hat{\delta}_{j}^{i}(k)]^T $. The desired feedback control is established and approximated using a neural network (NN) [40] as follows

$$ \begin{aligned} \alpha _{i,1}^{*}(k)&=\frac{1}{g_{i,1}}\left[-\digamma _{i,1}\left(\xi _{i,1}\right) +\bar{\digamma }_{j,1}\left(\chi _{j,1}\right)\right] \nonumber \\&=w_{i,1}^{T}\phi _{i,1}\left(\vartheta _{i,1}^{T}X_{i,1}\right)+\varepsilon _{i,1}(k) \end{aligned} $$(9)

where wi, 1 and ϑi, 1 are the weights of the output and hidden layers, ϕi, 1 is the activation function of the hidden layer, Xi, 1 = [ξi, 1(k),χj, 1(k)]T is the input of the NN, and the approximation error satisfies $ |\varepsilon_{i,1}(k)|\leq \bar{\varepsilon}_{i,1} $. Note that

$$ \begin{aligned} \left|\digamma _{i,1}\left(x_{i,1}\right)-\digamma _{i,1}\left(\xi _{i,1}\right)\right|&\le \mathrm{\Psi }_{i,1}\left|x_{i,1}-\xi _{i,1}\right|\nonumber \\&\le \mathrm{\Psi }_{i,1}e_{i,1}(k). \end{aligned} $$(10)

Choose suitable positive parameters Ψi, 1 and $ \bar{m}_{i,1} $ such that $ |{\mathrm{\Delta}}_{i,1}^{j}(k+1)+g_{i,1}e_{i,2}(k)+{\mathrm{\Psi}}_{i,1}e_{i,1}(k)|\leq \bar{m}_{i,1}\bar{{\mathrm{\Delta}}}_{i} $ holds. Design αi, 1(k) and $ \hat{w}_{i,1}(k+1) $ as follows

$$ \begin{aligned} \alpha _{i,1}(k)&=\hat{w}_{i,1}^{T}(k)\phi _{i,1}(k) \end{aligned} $$(11) $$ \begin{aligned} \hat{w}_{i,1}(k+1)&=\hat{w}_{i,1}(k)-l_{i,1}\phi _{i,1}(k)\left(k_{i,1}z_{i,1}(k)+\hat{w}_{i,1}^{T}(k)\phi _{i,1}(k)\right) \end{aligned} $$(12)

where li, 1 and ki, 1 are positive parameters. Then, one can obtain

$$ \begin{aligned} z_{i,1}(k+1)&=\sum _{j=1}^{N}a_{i,j}g_{i,1}\left[z_{i,2}(k)+\tilde{w}_{i,1}(k)\phi _{i,1}(k)+\varepsilon _{i,1}(k)\right] +d_{i}\bar{m}_{i,1}\bar{\mathrm{\Delta }}_{i} \end{aligned} $$(13)

where $ \tilde{w}_{i,1}(k)=\hat{w}_{i,1}(k)-w_{i,1} $ is the weight estimation error.
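As a computational illustration, the virtual control (11) and the weight update (12) amount to a few lines per time step. The sketch below assumes a Gaussian radial-basis-function hidden layer (consistent with [40]) with fixed, randomly chosen centres; the dimensions, centres, and gain values are illustrative choices rather than quantities prescribed by the paper, and the same update structure is reused in steps ϒ and n below.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, centres, width=1.0):
    """Gaussian radial basis functions phi(X) with fixed centres (cf. [40])."""
    return np.exp(-np.sum((centres - X) ** 2, axis=1) / (2.0 * width ** 2))

# illustrative dimensions: 5 hidden nodes, 3-dimensional NN input X_{i,1}
centres = rng.uniform(-1.0, 1.0, size=(5, 3))
w_hat = 0.1 * np.ones(5)               # initial weight estimate w_hat_{i,1}(0)
l_i1, k_i1 = 0.5, 0.3                  # positive design parameters

def step1_update(X_i1, z_i1, w_hat):
    """Virtual control (11) and adaptive law (12) for one time step."""
    phi = rbf(X_i1, centres)
    alpha_i1 = w_hat @ phi                                          # (11)
    w_hat_next = w_hat - l_i1 * phi * (k_i1 * z_i1 + w_hat @ phi)   # (12)
    return alpha_i1, w_hat_next

alpha, w_hat = step1_update(np.array([0.2, -0.1, 0.05]), 0.3, w_hat)
```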

Step ϒ (ϒ = 2, …, n − 1). Based on zi, Υ(k)=ξi, Υ(k)−αi, Υ − 1, zi, Υ(k + 1) is computed as

$$ \begin{aligned} z_{i,\mathrm{\Upsilon }}(k+1)&=\mathrm{\Delta }_{i,\mathrm{\Upsilon }}(k+1) +g_{i,\mathrm{\Upsilon }}\left[e_{i,\mathrm{\Upsilon }+1}(k)+z_{i,\mathrm{\Upsilon }+1}(k) +\alpha _{i,\mathrm{\Upsilon }}(k)\right] \nonumber \\&\quad +\digamma _{i,\mathrm{\Upsilon }}\left(\bar{x}_{i,\mathrm{\Upsilon }}\right) -\digamma _{i,\mathrm{\Upsilon }}\left(\bar{\xi }_{i,\mathrm{\Upsilon }}\right) +\digamma _{i,\mathrm{\Upsilon }}\left(\bar{\xi }_{i,\mathrm{\Upsilon }}\right) -\bar{\digamma }_{i,\mathrm{\Upsilon }}\left(\alpha _{i,\mathrm{\Upsilon }-1}(k)\right) \end{aligned} $$(14)

where $ \alpha_{i,{\mathrm{\Upsilon}}-1}(k+1)=\bar{\digamma}_{i,{\mathrm{\Upsilon}}}(\alpha_{i,{\mathrm{\Upsilon}}-1}(k)) $, and $ \bar{\digamma}_{i,{\mathrm{\Upsilon}}}(\alpha_{i,{\mathrm{\Upsilon}}-1}(k)) $ is a nonlinear mapping.

The desired feedback control is established and approximated by an NN as follows

$$ \begin{aligned} \alpha _{i,\mathrm{\Upsilon }}^{*}(k)&=\frac{1}{g_{i,\mathrm{\Upsilon }}} \left[-\digamma _{i,\mathrm{\Upsilon }}\left(\xi _{i,\mathrm{\Upsilon }}\right) +\bar{\digamma }_{i,\mathrm{\Upsilon }}\left(\alpha _{i,\mathrm{\Upsilon }-1}(k)\right)\right] \nonumber \\&=w_{i,\mathrm{\Upsilon }}^{T}\phi _{i,\mathrm{\Upsilon }} \left(\vartheta _{i,\mathrm{\Upsilon }}^{T}X_{i,\mathrm{\Upsilon }}\right) +\varepsilon _{i,\mathrm{\Upsilon }}(k) \end{aligned} $$(15)

where $ X_{i,{\mathrm{\Upsilon}}}=[\bar{\xi}_{i,{\mathrm{\Upsilon}}}^T(k),\hat{x}_{j,1}(k),\bar{\hat{w}}^T_{i,{\mathrm{\Upsilon}}-1}(k)]^T $ and $ |\varepsilon_{i,{\mathrm{\Upsilon}}}(k)|\leq \bar{\varepsilon}_{i,{\mathrm{\Upsilon}}} $.

Likewise,

$$ \begin{aligned} \left|\digamma _{i,\mathrm{\Upsilon }}\left(\hat{x}_{i,\mathrm{\Upsilon }}\right) -\digamma _{i,\mathrm{\Upsilon }}\left(\xi _{i,\mathrm{\Upsilon }}\right)\right|&\le \mathrm{\Psi }_{i,\mathrm{\Upsilon }}\sum _{o=1}^{\mathrm{\Upsilon }} \left|x_{i,o}-\xi _{i,o}\right|\le \mathrm{\Psi }_{i,\mathrm{\Upsilon }}\sum _{o=1}^{\mathrm{\Upsilon }}e_{i,o}(k) \end{aligned} $$(16)

and $ |{\mathrm{\Delta}}_{i,{\mathrm{\Upsilon}}}(k+1)+g_{i,{\mathrm{\Upsilon}}}e_{i,{\mathrm{\Upsilon}}+1}(k)+{\mathrm{\Psi}}_{i,{\mathrm{\Upsilon}}}\sum_{o=1}^{{\mathrm{\Upsilon}}}e_{i,o}(k)|\leq \bar{m}_{i,{\mathrm{\Upsilon}}}\bar{{\mathrm{\Delta}}}_{i} $, where Ψi, Υ and $ \bar{m}_{i,{\mathrm{\Upsilon}}} $ are positive constants.

Construct the virtual controller and adaptive law as follows

$$ \begin{aligned} \alpha _{i,\mathrm{\Upsilon }}(k)&=\hat{w}_{i,\mathrm{\Upsilon }}^{T}(k) \phi _{i,\mathrm{\Upsilon }}(k) \end{aligned} $$(17) $$ \begin{aligned} \hat{w}_{i,\mathrm{\Upsilon }}(k+1)&=\hat{w}_{i,\mathrm{\Upsilon }}(k) -l_{i,\mathrm{\Upsilon }}\phi _{i,\mathrm{\Upsilon }}(k) \left(k_{i,\mathrm{\Upsilon }}z_{i,\mathrm{\Upsilon }}(k)+\hat{w}_{i,\mathrm{\Upsilon }}^{T}(k)\phi _{i,\mathrm{\Upsilon }}(k)\right) \end{aligned} $$(18)

where li, Υ and ki, Υ are positive constants.

Thus, one has

$$ \begin{aligned} z_{i,\mathrm{\Upsilon }}(k+1)&=g_{i,\mathrm{\Upsilon }} \left[z_{i,\mathrm{\Upsilon }+1}(k)+\tilde{w}_{i,\mathrm{\Upsilon }}(k) \phi _{i,\mathrm{\Upsilon }}(k)+\varepsilon _{i,\mathrm{\Upsilon }}(k)\right] +\bar{m}_{i,\mathrm{\Upsilon }}\bar{\mathrm{\Delta }}_{i} \end{aligned} $$(19)

where $ \tilde{w}_{i,{\mathrm{\Upsilon}}}(k)=\hat{w}_{i,{\mathrm{\Upsilon}}}(k)-w_{i,{\mathrm{\Upsilon}}} $.

Step n. Based on zi, n(k)=ξi, n(k)−αi, n − 1, we calculate zi, n(k + 1) as follows

$$ \begin{aligned} z_{i,n}(k+1)&=\mathrm{\Delta }_{i,n}^{j}(k+1)+g_{i,n}u_{i}(k) -\bar{\digamma }_{i,n}\left(\alpha _{i,n-1}(k)\right) \nonumber \\&\quad +\digamma _{i,n}\left(\bar{x}_{i,n}\right)-\digamma _{i,n}\left(\bar{\xi }_{i,n}\right) +\digamma _{i,n}\left(\bar{\xi }_{i,n}\right) \end{aligned} $$(20)

where $ \alpha_{i,n-1}(k+1)=\bar{\digamma}_{i,n}(\alpha_{i,n-1}(k)) $, and $ \bar{\digamma}_{i,n}(\alpha_{i,n-1}(k)) $ is a nonlinear mapping.

An ideal feedback control is defined and approximated by an NN as

$$ \begin{aligned} u_{i}^{*}(k)&=\frac{1}{g_{i,n}}\left[-\digamma _{i,n}\left(\xi _{i,n}\right) +\bar{\digamma }_{i,n}\left(\alpha _{i,n-1}(k)\right)\right] \nonumber \\&=w_{i,n}^{T}\phi _{i,n}\left(\vartheta _{i,n}^{T}X_{i,n}\right)+\varepsilon _{i,n}(k) \end{aligned} $$(21)

where $ X_{i,n}=[\bar{\xi}_{i,n}^T(k),\hat{x}_{j,1}(k),\bar{\hat{w}}^T_{i,n-1}(k)]^T $ and $ |\varepsilon_{i,n}(k)|\leq \bar{\varepsilon}_{i,n} $.

Observing

$$ \begin{aligned} \left|\digamma _{i,n}\left(\hat{x}_{i,n}\right)-\digamma _{i,n}\left(\xi _{i,n}\right)\right|&\le \mathrm{\Psi }_{i,n}\sum _{\beta =1}^{n} \left|x_{i,\beta }-\xi _{i,\beta }\right|\le \mathrm{\Psi }_{i,n}\sum _{\beta =1}^{n}e_{i,\beta }(k) \end{aligned} $$(22)

and choosing positive constants Ψi, n and $ \bar{m}_{i,n} $, the bound $ |{\mathrm{\Delta}}_{i,n}^{j}(k+1)+{\mathrm{\Psi}}_{i,n}\sum_{{\mathrm{\Upsilon}}=1}^{n}e_{i,{\mathrm{\Upsilon}}}(k)|\leq \bar{m}_{i,n}\bar{{\mathrm{\Delta}}}_{i} $ holds.

The actual controller and adaptive law are established as follows

$$ \begin{aligned} u_{i}(k)&=\hat{w}_{i,n}^{T}(k)\phi _{i,n}(k) \end{aligned} $$(23) $$ \begin{aligned} \hat{w}_{i,n}(k+1)&=\hat{w}_{i,n}(k)-l_{i,n}\phi _{i,n}(k)\left(k_{i,n}z_{i,n}(k)+\hat{w}_{i,n}^{T}(k)\phi _{i,n}(k)\right) \end{aligned} $$(24)

where parameters li, n >  0 and ki, n >  0.

Thus,

$$ \begin{aligned} z_{i,n}(k+1)&=g_{i,n}\left[\tilde{w}_{i,n}(k)\phi _{i,n}(k)+\varepsilon _{i,n}(k)\right] +\bar{m}_{i,n}\bar{\mathrm{\Delta }}_{i} \end{aligned} $$(25)

holds, where $ \tilde{w}_{i,n}(k)=\hat{w}_{i,n}(k)-w_{i,n} $.

The block diagram of the design process is shown in Figure 1.

Figure 1. Block diagram of the design process

4. Stability analysis

Theorem 1.

For a multi-agent system composed of N agents, each described by the unknown nonlinear system (1), if the conditions

$$ \begin{aligned} 0<r_{i,m}<2,\ 0&<l_{i,m}<1,\ m=1,\ldots ,n, \ 0<L_{i}<1\nonumber \\ k_{i,1}&<\sqrt{\frac{r_{i,1}}{16\bar{g}_{i,1}^2}} \nonumber \\ k_{i,o}&<\sqrt{\frac{r_{i,o}}{16\bar{g}_{i,o}^2}-\frac{r_{i,o-1}}{4}},\ o=2,\ldots ,n-1 \nonumber \\ k_{i,n}&<\sqrt{\frac{r_{i,n}}{12\bar{g}_{i,n}^2}-\frac{r_{i,n-1}}{4}} \end{aligned} $$

are satisfied, the designed distributed control scheme (23) ensures that the consensus error and all the signals are semi-globally bounded.
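Because the gain conditions above involve several coupled inequalities, a small helper that checks them numerically for a candidate parameter set can be a convenient reading aid. The sketch below is only an assumption-laden illustration; the bounds ḡi, m on the unknown gains are taken to be supplied by the designer.

```python
import math

def _lt_sqrt(x, arg):
    """x < sqrt(arg), treating a non-positive argument as a failed condition."""
    return arg > 0.0 and x < math.sqrt(arg)

def gains_satisfy_theorem1(r, l, k, g_bar, L_i):
    """Check the parameter conditions of Theorem 1 for one agent;
    r, l, k, g_bar are length-n sequences for steps m = 1..n (0-based here)."""
    n = len(r)
    ok = all(0.0 < rm < 2.0 for rm in r) and all(0.0 < lm < 1.0 for lm in l)
    ok = ok and (0.0 < L_i < 1.0)
    ok = ok and _lt_sqrt(k[0], r[0] / (16.0 * g_bar[0] ** 2))
    for o in range(1, n - 1):                      # middle steps o = 2, ..., n-1
        ok = ok and _lt_sqrt(k[o], r[o] / (16.0 * g_bar[o] ** 2) - r[o - 1] / 4.0)
    ok = ok and _lt_sqrt(k[n - 1], r[n - 1] / (12.0 * g_bar[n - 1] ** 2) - r[n - 2] / 4.0)
    return ok
```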

Proof. Adopt the Lyapunov function

$$ \begin{aligned} V_{i}(k)&=\frac{r_{i,1}}{8\bar{g}_{i,1}^2}z_{i,1}^2(k) +\sum _{o=2}^{n-1}\frac{r_{i,o}d_{i}^2}{8\bar{g}_{i,o}^2}z_{i,o}^2(k) +\frac{d_{i}^2}{6\bar{g}_{i,n}^2}z_{i,n}^2(k) +\sum _{o=1}^{n}\frac{d_{i}^2}{l_{i,o}}\tilde{w}_{i,o}^{T}(k) \tilde{w}_{i,o}(k)+\sum _{j\in \mathcal{N} _{i}}\frac{1}{L_{i}}\left(\tilde{\delta }_{j}^{i}\right)^2(k). \end{aligned} $$(26)

The difference of (26) can be calculated as

$$ \begin{aligned} \mathrm{\Delta }{V}_{i}&\le -\frac{r_{i,1}}{8\bar{g}_{i,1}^2}z_{i,1}^{2}(k) -\sum _{o=2}^{n-1}\left(\frac{r_{i,o}}{8\bar{g}_{i,o}^2}-\frac{r_{i,o-1}}{2}\right)d_{i}^{2} z_{i,o}^{2}(k)-\left(\frac{r_{i,n}}{6\bar{g}_{i,n}^2}-\frac{r_{i,n-1}}{2}\right)d_{i}^2z_{i,n}^{2} (k) \nonumber \\&\quad +\sum _{o=1}^{n}\left[\frac{r_{i,o}}{2\bar{g}_{i,o}^2} \left(d_{i}\bar{m}_{i,o}\bar{\mathrm{\Delta }}_{i}\right)^2 +\frac{r_{i,o}}{2}\left(d_{i}\bar{\varepsilon }_{i,o}\right)^2 +\frac{r_{i,o}}{2}\left(d_{i}\tilde{w}_{i,o}^{T}(k)\phi _{i,o} \left(X_{i,o}\right)\right)^2\right]\nonumber \\&\quad +\sum _{o=1}^{n}\left[-2d_{i}^2\tilde{w}_{i,o}^{T}(k)\phi _{i,o}\left(k_{i,o}z_{i,o}(k)+\hat{w}_{i,o}^{T}(k)\phi _{i,o}\right)\right.\nonumber \\&\quad +\left.d_{i}^2l_{i,o}\left\Vert\phi _{i,o}\right\Vert^2\left(k_{i,o}z_{i,o}(k)+\hat{w}_{i,o}^{T}(k)\phi _{i,o}\right)^2\right] -\sum _{j\in \mathcal{N} _{i}}2\tilde{\delta }_{j}^{i}\left(K_{i}z_{i,1}(k)+\hat{\delta }_{j}^{i}(k)\right)\nonumber \\&\quad +\sum _{j\in \mathcal{N} _{i}}L_{i}\left(K_{i}z_{i,1}(k)+\hat{\delta }_{j}^{i}(k)\right)^2+\sum _{j\in \mathcal{N} _{i}}\frac{4}{L_{i}}\left(\bar{\delta }_{j}^{i}\right)^2. \end{aligned} $$(27)

By choosing 0 <  li, o <  1 and 0 <  Li <  1, the following inequality is valid

$$ \begin{aligned}&\sum _{o=1}^{n}\left[-2d_{i}^2\tilde{w}_{i,o}^{T}(k)\phi _{i,o} \left(k_{i,o}z_{i,o}(k)+\hat{w}_{i,o}^{T}(k)\phi _{i,o}\right)\right.\nonumber \\&\qquad +\left.d_{i}^2l_{i,o}\left\Vert\phi _{i,o}\right\Vert^2\left(k_{i,o}z_{i,o}(k)+\hat{w}_{i,o}^{T}(k)\phi _{i,o}\right)^2\right]\nonumber \\&\qquad =\sum _{o=1}^{n}\left[-2d_{i}^2\left(\tilde{w}_{i,o}^{T}(k)\phi _{i,o}\right)^2 -2\tilde{w}_{i,o}^{T}(k)\phi _{i,o}\left(k_{i,o}z_{i,o}(k)+w_{i,o}^{T}(k)\phi _{i,o}\right)\right.\nonumber \\&\qquad \quad +d_{i}^2\left\Vert\phi _{i,o}\right\Vert^2\left(k_{i,o}z_{i,o}(k)+\tilde{w}_{i,o}^{T}(k)\phi _{i,o}+w_{i,o}^{T}(k)\phi _{i,o}\right)^2 \nonumber \\&\qquad \quad -\left.\left(1-l_{i,o}\right)d_{i}^2\left\Vert\phi _{i,o}\right\Vert^2 \left(k_{i,o}z_{i,o}(k)+\tilde{w}_{i,o}^{T}(k)\phi _{i,o}+w_{i,o}^{T}(k)\phi _{i,o}\right)^2\right]\nonumber \\&\qquad \le \sum _{o=1}^{n}\left[-d_{i}^2\left(\tilde{w}_{i,o}^{T}(k)\phi _{i,o}\right)^2 +2d_{i}^2\bar{w}_{i,o}^2+2d_{i}^2k_{i,o}^2z_{i,o}^2(k)\right]. \end{aligned} $$(28)

Similarly,

$$ \begin{aligned}&-2\tilde{\delta }_{j}^{i}\left(K_{i}z_{i,1}(k)+\hat{\delta }_{j}^{i}(k)\right) +L_{i}\left(K_{i}z_{i,1}(k)+\hat{\delta }_{j}^{i}(k)\right)^2 \le -\left(\tilde{\delta }_{j}^{i}\right)^2 +2K_{i}^{2}z_{i,1}^{2}+2\left(\bar{\delta }_{j}^{i}\right)^2. \end{aligned} $$(29)

Furthermore, ΔVi can be rewritten as

$$ \begin{aligned} \mathrm{\Delta }{V}_{i}&\le -\left(\frac{r_{i,1}}{8\bar{g}_{i,1}^2}-2k_{i,1}^2-2K_{i}^2\right)z_{i,1}^{2}(k) -\sum _{o=2}^{n-1}\left(\frac{r_{i,o}d_{i}^2}{8\bar{g}_{i,o}^2}-\frac{r_{i,o-1}d_{i}^2}{2} \right. \nonumber \\&\quad -\left.2k_{i,o}^2d_{i}^2\right)z_{i,o}^{2}(k)-\left(\frac{r_{i,n}d_{i}^2}{6}-\frac{r_{i,n-1}d_{i}^2}{2}-2k_{i,n}^2d_{i}^2\right)z_{i,n}^{2}(k)\nonumber \\&\quad +\sum _{o=1}^{n}\left[\frac{r_{i,o}}{2}\left(d_{i}\bar{m}_{i,o}\bar{\mathrm{\Delta }}_{i}\right)^2 +\frac{r_{i,o}}{2}\left(d_{i}\bar{\varepsilon }_{i,o}\right)^2 +2d_{i}^2\bar{w}_{i,o}^2\right]\nonumber \\&\quad -\sum _{o=1}^{n}\left(1-\frac{r_{i,o}}{2}\right)d_{i}^2 \left(\tilde{w}_{i,o}^{T}(k)\phi _{i,o}\right)^2-\sum _{j\in \mathcal{N} _{i}}\left(\tilde{\delta }_{j}^{i}(k)\right)^2+\sum _{j\in \mathcal{N} _{i}}\left(2+\frac{4}{L_{i}}\right)\left(\bar{\delta }_{j}^{i}\right)^2. \end{aligned} $$(30)

Based on (30), ΔVi(k)≤0 holds whenever

$$ \begin{aligned} |z_{i,1}(k)|&> \sqrt{\frac{\mathcal{M} _{i}}{\frac{r_{i,1}}{8\bar{g}_{i,1}^2}-2k_{i,1}^2-2K_{i}^2}} \end{aligned} $$(31)

or

$$ \begin{aligned} |z_{i,o}(k)|&>\sqrt{ \frac{\mathcal{M} _{i}}{\frac{r_{i,o}d_{i}^2}{8\bar{g}_{i,o}^2}-\frac{r_{i,o-1}d_{i}^2}{2}-2k_{i,o}^2d_{i}^2}} \end{aligned} $$(32)

or

$$ \begin{aligned} |z_{i,n}(k)|&>\sqrt{ \frac{\mathcal{M} _{i}}{\frac{r_{i,n}d_{i}^2}{6}-\frac{r_{i,n-1}d_{i}^2}{2}-2k_{i,n}^2d_{i}^2}} \end{aligned} $$(33)

where

$$ \begin{aligned} \mathcal{M} _{i}&=\sum _{o=1}^{n} \left[\frac{r_{i,o}}{2}\left(d_{i}\bar{m}_{i,o}\bar{\mathrm{\Delta }}_{i}\right)^2 +\frac{r_{i,o}}{2}\left(d_{i}\bar{\varepsilon }_{i,o}\right)^2+2d_{i}^2\bar{w}_{i,o}^2\right] +\sum _{j\in \mathcal{N} _{i}}\left(2+\frac{4}{L_{i}}\right)\left(\bar{\delta }_{j}^{i}\right)^2. \end{aligned} $$(34)

According to the Lyapunov extension theorem in [41], the preceding analysis proves that the errors zi, 1(k),…,zi, n(k) and the adaptive laws $ \hat{w}_{i,1}(k),\ldots,\hat{w}_{i,n}(k) $ and $ \hat{\delta}_{j}^{i}(k) $ are bounded. Further, it can be observed that ui is bounded. In addition, we derive the bound of $ \tilde{\delta}_{j}^{i} $. According to (30), ΔVi(k)≤0 also holds whenever

$$ \begin{aligned} \left|\tilde{\delta }_{j}^{i}(k)\right|&>\sqrt{\mathcal{M} _{i}}. \end{aligned} $$(35)

Based on the standard Lyapunov extension theorem, $ \tilde{\delta}_{j}^{i} $ is bounded.

By denoting ej, 𝜘(k)=xj, 𝜘(k)−ξj, 𝜘(k) (𝜘=1, …, n), the error can be rewritten as

$$ \begin{aligned} e_{j,\varkappa }(k)&=\left(x_{j,\varkappa }(k)-\xi _{j,\varkappa }(k-1)\right) -s_{j,\varkappa }(k)-\hat{\delta }_{j}^{i}(k)\\&=\left(x_{j,\varkappa }(k)-\xi _{j,\varkappa }(k-1)\right) -s^{a}_{j,\varkappa }(k)-\tilde{\delta }_{j}^{i}(k)\\&=-\mathrm{\Delta }_{j,\varkappa }(k)-\tilde{\delta }_{j}^{i}(k) \end{aligned} $$

where $ s^{a}_{j,\varkappa}(k) $ denotes the true (attack-free) signal. Thus, the errors ej, 𝜘(k) are bounded.

Furthermore, one obtains

$$ \begin{aligned} \lim _{k\rightarrow \infty }\left|y_{i}(k)-y_{j}(k)\right|&\le \left|z_{i,1}(k)\right|+\left|x_{i,1}(k)-\xi _{i,1}(k)\right| +\left|\xi _{j,1}(k)-x_{j,1}(k)\right|\\&\le \left|z_{i,1}(k)\right|+\left|e_{i,1}(k)\right|+\left|e_{j,1}(k)\right|\\&\le \left|z_{i,1}(k)\right|+\bar{\mathrm{\Delta }}_{i}+\bar{\mathrm{\Delta }}_{j} +\left|\tilde{\delta }_{j}^{i}(k)\right|. \end{aligned} $$

In accordance with the bounds of zi, 1(k) and $ \tilde{\delta}_{j}^{i}(k) $, the consensus errors are uniformly ultimately bounded. This completes the proof.

5. Simulation results

In this section, two simulation cases are presented to illustrate the effectiveness of the control method.

Example 1. A system comprising six single-link manipulators is utilized, with each single-link manipulator being modeled according to [42]

$$ \begin{aligned} \left\{ \begin{aligned} \frac{\mathrm{d}\theta }{\mathrm{d}t}&=\omega \\ \frac{\mathrm{d}\omega }{\mathrm{d}t}&=G_{i}^{-1}\left(u_{i}-MgL_{i}\sin (\theta )-f_{d}\frac{\mathrm{d}\theta }{\mathrm{d}t}\right) \end{aligned}\right. \end{aligned} $$

where M = 1 kg and Li = 0.1i + 0.5 m. The system states are selected as the current angle θ and the associated angular velocity ω. Here, let the moment of inertia $ G_{i}^{-1}=(0.1i+0.5)^2 $ kg ⋅ m2, the viscous friction fd = 2 kg ⋅ m2, and the acceleration of gravity g = 9.81 m s−2. The Euler method with a sampling interval Δt = 0.1 s is used to discretize the systems as follows

$$ \begin{aligned} \left\{ \begin{aligned} x_{i,1}(k+1)&=x_{i,1}(k)+\mathrm{\Delta } tx_{i,2}(k)\\ x_{i,2}(k+1)&=G_{i}^{-1}\mathrm{\Delta } tu_{i}(k)-G_{i}^{-1}MgL_{i}\mathrm{\Delta } t\sin (x_{i,1}(k))+ \left(1-f_{d}G_{i}^{-1}\mathrm{\Delta } t\right)x_{i,2}(k) \\ y_{i}(k)&=x_{i,1}(k),\quad i=1,\ldots ,6. \end{aligned}\right. \end{aligned} $$
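A minimal sketch of this discretized manipulator model is given below for readers who wish to reproduce the example numerically; the control input is left as a placeholder, since in the actual experiment ui(k) is produced by the codec-based controller (23).

```python
import numpy as np

M, g, f_d, dt = 1.0, 9.81, 2.0, 0.1            # parameters from the text

def manipulator_step(x, u, i):
    """One Euler step of the discretized single-link manipulator of agent i."""
    L_i = 0.1 * i + 0.5
    G_inv = (0.1 * i + 0.5) ** 2               # G_i^{-1} as specified in the text
    x1, x2 = x
    x1_next = x1 + dt * x2
    x2_next = (G_inv * dt * u
               - G_inv * M * g * L_i * dt * np.sin(x1)
               + (1.0 - f_d * G_inv * dt) * x2)
    return np.array([x1_next, x2_next])

# e.g. agent 1 starting from x1(0) = [0.5, -8.5]^T with a placeholder input
x = np.array([0.5, -8.5])
for k in range(100):
    u = 0.0                                    # placeholder; the paper uses (23)
    x = manipulator_step(x, u, i=1)
```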

They are connected to each other through a network, the structure of which is shown in Figure 2. We assume that the false signals 5sin(k) and 5cos(k) are injected into the edges (v1, v6) and (v4, v3), respectively.

Figure 2. Communication topology

Choose the parameters as l1 = [0.1, 0.9, 0.1, 0.01, 0.1, 0.1]T, l2 = [0.9, 0.5, 0.5, 0.5, 0.1, 0.1]T, k1 = [0.1, 0.3, 0.5, 0.3, 0.3, 1.3]T, k2 = [0.1, 0.5, 0.1, 0.5, 0.5, 0.5]T. Set the initial values as x1(0) = [0.5,−8.5]T, x2(0) = [−0.3,1.05]T, x3(0) = [1,−10.85]T, x4(0) = [0.2, −2.06]T, x5(0) = [−0.2,2.56]T, x6(0) = [−0.5,6.05]T, $ \hat{w}_{11}(0)=[0.1,-0.0,0.1,0.1,-1]^{T} $, $ \hat{w}_{21}(0)=[-0.01,0.1,0.1,0.1,0.1]^{T} $, $ \hat{w}_{31}(0)=[0.1,0.1,-0.1,0.1,0.1]^{T} $, $ \hat{w}_{41}(0)=\hat{w}_{51}(0)=\hat{w}_{61}(0)=[0.1,-0.1,0.1,-0.1,0.1]^{T} $, $ \hat{w}_{12}(0)=[0.1,-0.1,0.1,0.1,0.1]^{T} $, $ \hat{w}_{22}(0)=[-0.1,0.1,0.1,0.1,0.1]^{T} $, $ \hat{w}_{32}(0)=[0.1,0.1,0.1,0.1,-0.1]^{T} $, $ \hat{w}_{42}(0)=\hat{w}_{52}(0)=\hat{w}_{62}(0)=[0.3,-0.1,0.1,-0.1,0.1]^{T} $. Figures 3–9 show the simulation results.

Figure 3. Outputs yi (i = 1, …, 6) of all agents

Figure 4. Consensus errors yi − yj (i = 1, …, 6)

Figure 5. Controller ui of every agent

Figure 6. The norms of adaptive laws wi, 1 (i = 1, …, 6)

Figure 7. The norms of adaptive laws wi, 2 (i = 1, …, 6)

Figure 8. The input xi, 1 − ξi, 1 and output q(xi, 1 − ξi, 1) of the quantizer

Figure 9. The input xi, 2 − ξi, 2 and output q(xi, 2 − ξi, 2) of the quantizer

In Figure 3, the outputs of all the agents are plotted. Figure 4 shows the consensus errors. The control inputs ui and the norms of the adaptive laws wi, 1 and wi, 2 are shown in Figures 5–7, respectively; it can be observed that they are all bounded. Figures 8 and 9 illustrate the inputs and outputs of the quantizers.

Example 2. Next, a cooperative example consisting of six unmanned ships is given to further demonstrate the effectiveness of the designed controller. Each unmanned ship is modeled as follows [43]

$$ \begin{aligned} T_{i}\ddot{\psi }_{i}+\dot{\psi }_{i}+K_{i}\dot{\psi }_{i}^3=\phi _{i}\nu _{i} \end{aligned} $$(36)

with Ti >  0, the Norrbin coefficient Ki, gain ϕi, course angle ψi, course angle rate $ r_{i}=\dot{\psi}_{i} $, and command rudder angle νi. We define xi, 1 = ψi and xi, 2 = ri, and use the Euler method with the sampling interval Δt = 0.1 s to discretize the systems as

$$ \begin{aligned} \left\{ \begin{aligned} x_{i,1}(k+1)&=x_{i,1}(k)+\mathrm{\Delta } tx_{i,2}(k)\\ x_{i,2}(k+1)&=T_{i}^{-1}\mathrm{\Delta } t\nu _{i}(k)-T_{i}^{-1}\mathrm{\Delta } tx_{i,2}(k)-G_{i}^{-1}K_{i}\mathrm{\Delta } tx_{i,2}^{3}(k) \\ y_{i}(k)&=x_{i,1}(k),\quad i=1,\ldots ,6. \end{aligned}\right. \end{aligned} $$
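As in Example 1, the ship dynamics can be stepped forward numerically. The sketch below applies the Euler method directly to the continuous model (36); since Ti, Ki, and ϕi are not given numerically in the text, the values used here are placeholders, and the rudder command is again left to be supplied by the controller (23).

```python
import numpy as np

dt = 0.1                                       # sampling interval from the text

def ship_step(x, nu, T_i=10.0, K_i=0.5, phi_i=1.0):
    """One Euler step of the Norrbin ship model (36) for one agent;
    T_i, K_i, phi_i are placeholder values, not taken from the paper."""
    psi, r = x                                 # course angle and its rate
    psi_next = psi + dt * r
    r_next = r + dt * (phi_i * nu - r - K_i * r ** 3) / T_i
    return np.array([psi_next, r_next])

x = np.array([0.5, -8.5])                      # x_1(0) from the text
for k in range(100):
    nu = 0.0                                   # placeholder rudder command; the paper uses (23)
    x = ship_step(x, nu)
```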

The communication network structure is displayed in Figure 10. Assume that false signals 0.5sin(k) and 0.5cos(k) are injected into edges (v2, v1) and (v5, v4), respectively.

Figure 10. Communication topology

Choose the parameters as l1 = [0.9, 0.9, 0.9, 0.01, 0.9, 0.9]T, l2 = [0.9, 0.5, 0.5, 0.5, 0.5, 0.5]T, k1 = [0.1, 0.1, 1.5, 0.3, 1.1, 0.1]T, k2 = [0.1, 0.1, 0.1, 0.1, 1.5, 0.1]T. Set the initial values as x1(0) = [0.5,−8.5]T, x2(0) = [−0.3,−0.3]T, x3(0) = [1,−13]T, x4(0) = [0.2,−4.8]T, x5(0) = [−0.2,−0.64]T, x6(0) = [−0.5,2.5]T, $ \hat{w}_{11}(0)=[0.5,-0.1,0.1,0.1,0.1]^{T} $, $ \hat{w}_{21}(0)=[-0.1,0.1,0.1,0.1,0.1]^{T} $, $ \hat{w}_{31}(0)=[-0.1,-0.1,-0.1,-0.1,-0.1]^{T} $, $ \hat{w}_{51}(0)=[0.3,-0.1,-0.1,-0.1,0.1]^{T} $, $ \hat{w}_{41}(0)=\hat{w}_{61}(0)=[0.3,-0.1,0.1,-0.1,0.1]^{T} $, $ \hat{w}_{12}(0)=[0.1,-0.1,0.1,0.1,0.1]^{T} $, $ \hat{w}_{22}(0)=[-0.1,0.1,0.1,0.1,0.1]^{T} $, $ \hat{w}_{32}(0)=[0.1,0.1,0.1,0.1,-0.1]^{T} $, $ \hat{w}_{42}(0)=\hat{w}_{52}(0)=\hat{w}_{62}(0)=[0.3,-0.1,0.1,-0.1,0.1]^{T} $.

Figures 11–15 show the simulation results. In Figure 11, the outputs of all the agents are plotted. Figure 12 shows the consensus errors. The control inputs ui and the norms of the adaptive laws wi, 1 and wi, 2 are given in Figures 13–15, respectively; it can be observed that they are all bounded.

Figure 11. Outputs yi (i = 1, …, 6) of all agents

Figure 12. Consensus errors yi − yj (i = 1, …, 6)

Figure 13. Controller ui of every agent

Figure 14. The norms of adaptive laws wi, 1 (i = 1, …, 6)

Figure 15. The norms of adaptive laws wi, 2 (i = 1, …, 6)

Remark 4.

Existing quantization-based coding and decoding techniques are primarily suited for linear multi-agent systems [19, 20] or the controller-to-actuator segment of an agent [35, 36]. They rarely address the complexities of high-order unknown nonlinear multi-agent systems. This scarcity of coverage arises from the inherent difficulty of applying existing quantization-based codec techniques to systems characterized by high-order nonlinearity and unknown dynamics. Moreover, in the presence of cyber attacks, these conventional techniques cannot guarantee that the system achieves consensus. Hence, the challenge lies in adapting quantization-based codec techniques to high-order unknown nonlinear MUSs within the context of FDI attacks.

6. Conclusion

This study focuses on dealing with the distributed cooperative control issue in unknown nonlinear discrete-time MUSs by utilizing limited communication resources within a backstepping framework. The communication between agents is facilitated using a quantizer-based codec mechanism, which addresses the limitations of communication bandwidth and ensures efficient transmission. Furthermore, FDI attacks are considered in the communication channels among agents, prompting the use of an adaptive method to counter the detected attacks. Employing the Lyapunov analysis technique, the developed control strategy guarantees the boundedness of all signals through appropriate parameter selection. Finally, examples are provided to illustrate the feasibility and effectiveness of the proposed scheme. In future work, the scope of the study will be extended to encompass the distributed cooperative control of nonlinear continuous-time MUSs using a finite-level quantizer-based codec mechanism.

Conflict of Interest

The authors declare no conflict of interest.

Data Availability

No data are associated with this article.

Authors’ Contributions

Yanhui Zhang designed the control scheme and wrote this paper. Di Mei discussed the recent development. Yong Xu and Lihua Dou supervised and corrected typos in the paper.

Acknowledgments

We thank the Editor-in-Chief, the Associate Editor, and the anonymous reviewers for their insightful and constructive comments.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant U20B2073, Grant 62103047, Beijing Institute of Technology Research Fund Program for Young Scholars, and Young Elite Scientists Sponsorship Program by BAST (Grant No. BYESS2023365).

References

  1. Watts AC, Ambrosia VG and Hinkley EA. Unmanned aircraft systems in remote sensing and scientific research: classification and considerations of use. Remote Sens 2012; 4: 1671–92.
  2. Verfuss UK, Aniceto AS, Harris DV et al. A review of unmanned vehicles for the detection and monitoring of marine fauna. Mar Pollut Bull 2019; 140: 17–29.
  3. Olfati-Saber R and Murray RM. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans Autom Control 2004; 49: 1520–33.
  4. Yang R, Liu L and Feng G. An overview of recent advances in distributed coordination of multi-agent systems. Unmanned Syst 2022; 10: 307–25.
  5. Shan H, Xue H, Hu S et al. Finite-time dynamic surface control for multi-agent systems with prescribed performance and unknown control directions. Int J Syst Sci 2022; 53: 325–36.
  6. Zhang J, Liu S and Zhang X. Observer-based distributed consensus for nonlinear multi-agent systems with limited data rate. Sci Chin Inf Sci 2022; 65: 192204.
  7. Zhu Y, Wang Z, Liang H et al. Neural-network-based predefined-time adaptive consensus in nonlinear multi-agent systems with switching topologies. IEEE Trans Neural Networks Learn Syst 2023, doi: http://doi.org/10.1109/TNNLS.2023.3238336.
  8. Chen J, Xie J, Li J et al. Human-in-the-loop fuzzy iterative learning control of consensus for unknown mixed-order nonlinear multi-agent systems. IEEE Trans Fuzzy Syst 2023, doi: 10.1109/TFUZZ.2023.3296572.
  9. Zhang H, Jiang H, Luo Y et al. Data-driven optimal consensus control for discrete-time multi-agent systems with unknown dynamics using reinforcement learning method. IEEE Trans Ind Electron 2016; 64: 4091–100.
  10. Li S, Zhang J, Li X et al. Formation control of heterogeneous discrete-time nonlinear multi-agent systems with uncertainties. IEEE Trans Ind Electron 2017; 64: 4730–40.
  11. Jiang Y, Fan J, Gao W et al. Cooperative adaptive optimal output regulation of nonlinear discrete-time multi-agent systems. Automatica 2020; 121: 109149.
  12. Li H and Li X. Distributed fixed-time consensus of discrete-time heterogeneous multi-agent systems via predictive mechanism and Lyapunov approach. IEEE Trans Circuits Syst II: Express Briefs 2023, doi: 10.1109/TCSII.2023.3302136.
  13. Fu M and Xie L. The sector bound approach to quantized feedback control. IEEE Trans Autom Control 2005; 50: 1698–711.
  14. Hayakawa T, Ishii H and Tsumura K. Adaptive quantized control for linear uncertain discrete-time systems. Automatica 2009; 45: 692–700.
  15. Nesic D and Liberzon D. A unified framework for design and analysis of networked and quantized control systems. IEEE Trans Autom Control 2009; 54: 732–47.
  16. Zhang Y, Zhang J, Liu X et al. Quantized-output feedback model reference control of discrete-time linear systems. Automatica 2022; 137: 110027.
  17. Liu G, Pan Y, Lam HK et al. Event-triggered fuzzy adaptive quantized control for nonlinear multi-agent systems in nonaffine pure-feedback form. Fuzzy Sets Syst 2021; 416: 27–46.
  18. Liu W, Ma Q, Xu S et al. State quantized output feedback control for nonlinear systems via event-triggered sampling. IEEE Trans Autom Control 2022; 67: 6810–17.
  19. Carli R, Fagnani F, Frasca P et al. Efficient quantized techniques for consensus algorithms. NeCST workshop 2007; 1–8.
  20. Li T and Xie L. Distributed consensus over digital networks with limited bandwidth and time-varying topologies. Automatica 2011; 47: 2006–15.
  21. Dong W. Consensus of high-order nonlinear continuous-time systems with uncertainty and limited communication data rate. IEEE Trans Autom Control 2009; 64: 2100–2107.
  22. Zhang H, Chi R, Hou Z et al. Data-driven iterative learning control using a uniform quantizer with an encoding–decoding mechanism. Int J Robust Nonlinear Control 2022; 32: 4336–4354.
  23. Ren H, Liu R, Cheng Z et al. Data-driven event-triggered control for nonlinear multi-agent systems with uniform quantization. IEEE Trans Circuits Syst II: Express Briefs 2023, doi: 10.1109/TCSII.2023.3305946.
  24. Zhu P, Jin S, Bu X et al. Distributed data-driven control for a connected heterogeneous vehicle platoon under quantized and switching topologies communication. IEEE Trans Veh Technol 2023; 72: 9796–807.
  25. Wang P, Ren X, Hu S et al. Event-based adaptive compensation control of nonlinear cyber-physical systems under actuator failure and false data injection attack. In: 2021 40th Chinese Control Conference. IEEE, 2021, 509–14.
  26. Guo H, Sun J and Pang Z. Residual-based false data injection attacks against multi-sensor estimation systems. IEEE/CAA J Autom Sinica 2023; in press, doi: 10.1109/JAS.2023.123441.
  27. Li X, Zhou Q, Li P et al. Event-triggered consensus control for multi-agent systems against false data-injection attacks. IEEE Trans Cybern 2020; 50: 1856–66.
  28. Meng M, Xiao G and Li B. Adaptive consensus for heterogeneous multi-agent systems under sensor and actuator attacks. Automatica 2020; 122: 109242.
  29. Zhang S, Che W and Deng C. Observer-based event-triggered secure synchronization control for multi-agent systems under false data injection attacks. Int J Robust Nonlinear Control 2022; 32: 4843–60.
  30. Wang Z, Shi S, He W et al. Observer-based asynchronous event-triggered bipartite consensus of multi-agent systems under false data injection attacks. IEEE Trans Control Netw Syst 2023; in press, doi: 10.1109/TCNS.2023.3235425.
  31. Huo S, Huang D and Zhang Y. Secure output synchronization of heterogeneous multi-agent systems against false data injection attacks. Sci Chin Inf Sci 2022; 65: 162204.
  32. Hu Z, Chen K, Deng F et al. H∞ controller design for networked systems with two-channel packet dropouts and FDI attacks. IEEE Trans Cybern 2023, doi: 10.1109/TCYB.2022.3233065.
  33. Wei B, Tian E, Gu Z et al. Quasi-consensus control for stochastic multiagent systems: when energy harvesting constraints meet multimodal FDI attacks. IEEE Trans Cybern 2023, doi: 10.1109/TCYB.2023.3253141.
  34. Tahoun A and Arafa M. Cooperative control for cyber-physical multi-agent networked control systems with unknown false data-injection and replay cyber-attacks. ISA Trans 2021; 110: 1–14.
  35. Cao L, Ren H, Li H et al. Event-triggered output-feedback control for large-scale systems with unknown hysteresis. IEEE Trans Cybern 2020; 51: 5236–47.
  36. Chen G, Yao D, Li H et al. Saturated threshold event-triggered control for multiagent systems under sensor attacks and its application to UAVs. IEEE Trans Circuits Syst I Regul Pap 2021; 69: 884–95.
  37. Hashemi M, Javad A, Jafar G et al. Robust adaptive actuator failure compensation for a class of uncertain nonlinear systems. Int J Autom Comput 2017; 14: 719–28.
  38. Liu H and Zhang T. Adaptive neural network finite-time control for uncertain robotic manipulators. J Int Robotic Syst 2014; 75: 363–77.
  39. Du J, Guo C, Yu S et al. Adaptive autopilot design of time-varying uncertain ships with completely unknown control coefficient. IEEE J Oceanic Eng 2007; 32: 346–52.
  40. Park J and Sandberg IW. Universal approximation using radial-basis-function networks. Neural Comput 1991; 3: 246–57.
  41. Jagannathan S, Vandegrift MW and Lewis FL. Adaptive fuzzy logic control of discrete-time dynamical systems. Automatica 2000; 36: 229–41.
  42. Ma H and Yang GH. Simultaneous fault diagnosis for robot manipulators with actuator and sensor faults. Inf Sci 2016; 366: 12–30.
  43. Tzeng C, Goodwin G and Crisafulli S. Feedback linearization design of a ship steering autopilot with saturating and slew rate limiting actuator. Int J Adapt Control Signal Process 1999; 13: 23–30.
Yanhui Zhang

Yanhui Zhang received a B.S. degree and an M.S. degree in Applied Mathematics from Bohai University, Jinzhou, China, in 2016 and 2019, respectively. She received her Ph.D. degree in the School of Automation, Beijing Institute of Technology in 2023. Her current research interests include fuzzy adaptive control, multi-agent systems, and nonlinear systems.

Di Mei

Di Mei received a B.Eng. degree in automation from Beijing Institute of Technology, Beijing, China, in 2015, and an M.Eng. degree in control engineering from Beijing Information Science and Technology University, Beijing, China, in 2018. He is currently pursuing a Ph.D. degree at the College of Automation, Beijing Institute of Technology, Beijing, China. His current research interests include multi-agent systems, distributed cooperative control, and reinforcement learning.

Yong Xu

Yong Xu received a Ph.D. degree in control science and engineering from Zhejiang University, Hangzhou, China, in 2020. He held a post-doctoral position supervised by Prof. Jian Sun at the State Key Lab of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology from 2020 to 2022. In July 2022, he joined the School of Automation at Beijing Institute of Technology, Beijing, where he is currently an Associate Professor. His current research interests include multi-agent systems, reinforcement learning/data-driven control, security analysis, and control of networked systems.

Lihua Dou

Lihua Dou received B.S., M.S., and Ph.D. degrees in control theory and control engineering from Beijing Institute of Technology, Beijing, China, in 1979, 1987, and 2001, respectively. She is currently a professor of control science and engineering with the Key Laboratory of Complex System Intelligent Control and Decision, School of Automation, Beijing Institute of Technology. Her research interests include multi-objective optimization and decision, pattern recognition, and image processing.

